id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
1704.00109#24 | Snapshot Ensembles: Train 1, get M for free | In some applications, it may be beneficial to vary the size of the ensemble dynamically at test time depending on available resources. Figure 3 displays the performance of DenseNet-40 on the CIFAR-100 dataset as the effective ensemble size, m, is varied. Each ensemble consists of snapshots from later cycles, as these snapshots have received the most training and therefore have likely converged to better minima. Although ensembling more models generally gives better performance, we observe significant drops in error when the second and third models are added to the ensemble. In most cases, an ensemble of two models outperforms the baseline model. Restart Learning Rate. The effect of the restart learning rate can be observed in Figure 3. The left two plots show performance when using a restart learning rate of α0 = 0.1 at the beginning of each cycle, and the right two plots show α0 = 0.2. In most cases, ensembles with the larger restart learning rate perform better, presumably because the strong perturbation in between cycles increases the diversity of local minima. Varying Number of Cycles. | 1704.00109#23 | 1704.00109#25 | 1704.00109 | [
"1503.02531"
]
|
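For readers who want to reproduce the schedule discussed above, the following is a minimal sketch (not the authors' code) of a cyclic cosine-annealing learning rate with warm restarts in the style of Loshchilov & Hutter (2016); the function name and the use of ceil(B/M) epochs per cycle are illustrative choices.

```python
import math

def snapshot_lr(epoch, total_epochs, num_cycles, alpha0):
    """Cyclic cosine-annealing learning rate with warm restarts.

    The budget of `total_epochs` is split into `num_cycles` cycles; the rate
    restarts at `alpha0` at the start of each cycle and anneals towards zero
    before the next snapshot is taken.
    """
    epochs_per_cycle = math.ceil(total_epochs / num_cycles)
    t = epoch % epochs_per_cycle  # position within the current cycle
    return alpha0 / 2.0 * (math.cos(math.pi * t / epochs_per_cycle) + 1.0)

# Example: B = 300 epochs, M = 6 cycles, restart learning rate 0.2
schedule = [snapshot_lr(e, 300, 6, 0.2) for e in range(300)]
assert abs(schedule[0] - 0.2) < 1e-9  # each cycle restarts near alpha0
```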
1704.00109#25 | Snapshot Ensembles: Train 1, get M for free | Given a fixed training budget, there is a trade-off between the number of learning rate cycles and their length. Therefore, we investigate how the number of cycles M affects the ensemble performance, given a fixed training budget. We train a 40-layer DenseNet on the CIFAR-100 dataset with an initial learning rate of α0 = 0.2. We fix the total training budget B = 300 epochs, and vary the value of M ∈ | 1704.00109#24 | 1704.00109#26 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#26 | Snapshot Ensembles: Train 1, get M for free | {2, 4, 6, 8, 10}. As shown in Table 3, our method is relatively robust with respect to different values of M. At the extremes, M = 2 and M = 10, we find a slight degradation in performance, as the cycles are either too few or too short. In practice, we find that setting M to be 4 to 8 works reasonably well. (Table 3: test error (%) of DenseNet-40 on CIFAR-100 for M = 2, 4, 6, 8, 10 is 22.92, 22.07, 21.93, 21.89 and 22.16, respectively.) Varying Training Budget. The left and middle panels of Figure 4 show the performance of Snapshot Ensembles and SingleCycle Ensembles as a function of training budget (where the number of cycles is fixed at M = 6). We train a 40-layer DenseNet on CIFAR-10 and CIFAR-100, with an initial learning rate of α0 = 0.1, varying the total number of training epochs from 60 to 300. | 1704.00109#25 | 1704.00109#27 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#27 | Snapshot Ensembles: Train 1, get M for free | [Figure 4 panels: ensemble test error (%) vs. training budget B (epochs) for DenseNet-40 on CIFAR-10 and CIFAR-100, comparing Snapshot Ensembles, SingleCycle Ensembles and a single model; the right panel compares a Snapshot Ensemble (60 epochs per model) with a true ensemble of fully trained models (300 epochs per model) as the number of models grows.] Figure 4: Snapshot Ensembles under different training budgets on (Left) CIFAR-10 and (Middle) CIFAR-100. Right: Comparison of Snapshot Ensembles with true ensembles. [Figure 5 panels: interpolation curves for CIFAR-10 and CIFAR-100 under cosine annealing and under standard learning rate scheduling, one curve per snapshot (1st through 5th).] Figure 5: Interpolations in parameter space between the final model (sixth snapshot) and all intermediate snapshots. λ = 0 represents an intermediate snapshot model, while λ = 1 represents the final model. Left: A Snapshot Ensemble, with cosine annealing cycles (α0 = 0.2 every B/M = 50 epochs). Right: A NoCycle Snapshot Ensemble (two learning rate drops, snapshots every 50 epochs). We observe that both Snapshot Ensembles and SingleCycle Ensembles become more accurate as training budget increases. However, we note that as training budget decreases, Snapshot Ensembles still yield competitive results, while the performance of the SingleCycle Ensembles degrades rapidly. These results highlight the improvements that Snapshot Ensembles obtain when the budget is low. If the budget is high, then the SingleCycle baseline approaches true ensembles and outperforms Snapshot Ensembles eventually. Comparison with True Ensembles. We compare Snapshot Ensembles with the traditional ensembling method. | 1704.00109#26 | 1704.00109#28 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#28 | Snapshot Ensembles: Train 1, get M for free | The right panel of Figure 4 shows the test error rates of DenseNet-40 on CIFAR-100. The true ensemble method averages models that are trained with 300 full epochs, each with different weight initializations. Given the same number of models at test time, the error rate of the true ensemble can be seen as a lower bound of our method. Our method achieves performance that is comparable with ensembling of 2 independent models, but with the training cost of one model. # 4.4 DIVERSITY OF MODEL ENSEMBLES Parameter Space. We hypothesize that the cyclic learning rate schedule creates snapshots which are not only accurate but also diverse with respect to model predictions. We qualitatively measure this diversity by visualizing the local minima that models converge to. To do so, we linearly interpolate snapshot models, as described by Goodfellow et al. (2014). Let J(θ) be the test error of a model using parameters θ. Given θ1 and θ2, the parameters from models 1 and 2 respectively, we can compute the loss for a convex combination of model parameters: J(λθ1 + (1 − λ)θ2), where λ is a mixing coefficient. Setting λ to 1 results in parameters that are entirely θ1, while setting λ to 0 gives the parameters θ2. By sweeping the values of λ, we can examine a linear slice of the parameter space. Two models that converge to a similar minimum will have smooth parameter interpolations, whereas models that converge to different minima will likely have a non-convex interpolation, with a spike in error when λ is between 0 and 1. Figure 5 displays interpolations between the final model of DenseNet-40 (sixth snapshot) and all intermediate snapshots. The left two plots show Snapshot Ensemble models trained with a cyclic learning rate, while the right two plots show NoCycle Snapshot models. λ = 0 represents a model which is entirely snapshot parameters, while λ = 1 represents a model which is entirely the parameters of the final model. From this figure, it is clear that there are differences between cyclic and | 1704.00109#27 | 1704.00109#29 | 1704.00109 | [
"1503.02531"
]
|
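The interpolation experiment described above can be expressed compactly. The sketch below is an illustration only (not the released implementation): it assumes each snapshot is available as a dict of NumPy parameter arrays and that `evaluate_error` is a user-supplied function returning test error for a given parameter set.

```python
import numpy as np

def interpolate_params(theta1, theta2, lam):
    """Convex combination lam * theta1 + (1 - lam) * theta2 of two parameter dicts."""
    return {name: lam * theta1[name] + (1.0 - lam) * theta2[name]
            for name in theta1}

def interpolation_curve(theta_snapshot, theta_final, evaluate_error, steps=21):
    """Sweep lambda from 0 (snapshot) to 1 (final model) and record test error."""
    curve = []
    for lam in np.linspace(0.0, 1.0, steps):
        mixed = interpolate_params(theta_final, theta_snapshot, lam)  # lam=1 -> final model
        curve.append((float(lam), evaluate_error(mixed)))
    return curve
```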
1704.00109#29 | Snapshot Ensembles: Train 1, get M for free | non-cyclic learning rate schedules. Firstly, all of the cyclic snapshots achieve roughly the same error as the final cyclical model, as the error is similar for λ = 0 and λ = 1. Additionally, it appears that most snapshots do not lie in the same minimum as the final model. Thus the snapshots are likely to misclassify different samples. Conversely, the first three snapshots achieve much higher error than the final model. This can be observed by the sharp minima around λ = 1, which suggests that mixing in any amount of the snapshot parameters will worsen performance. While the final two snapshots achieve low error, the figures suggest that they lie in the same minimum as the final model, and therefore likely add limited diversity to the ensemble. | 1704.00109#28 | 1704.00109#30 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#30 | Snapshot Ensembles: Train 1, get M for free | Activation space. To further explore the diversity of models, we compute the pairwise correlation of softmax outputs for every pair of snapshots. Figure 6 displays the average correlation for both cyclic snapshots and non-cyclic snapshots. Firstly, there are large correlations between the last 3 snapshots of the non-cyclic training schedule (right). These snapshots are taken after dropping the learning rate, suggesting that each snapshot has converged to the same minimum. Though there is more diversity amongst the earlier snapshots, these snapshots have much higher error rates and are therefore not ideal for ensembling. Conversely, there is less correlation between all cyclic snapshots (left). Because all snapshots have similar accuracy (as can be seen in Figure 5), these differences in predictions can be exploited to create effective ensembles. | 1704.00109#29 | 1704.00109#31 | 1704.00109 | [
"1503.02531"
]
|
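As an illustration of the activation-space measurement described above, the following sketch (ours, with assumed input shapes) computes the average pairwise Pearson correlation between the softmax outputs of snapshots, each given as an array of shape (num_examples, num_classes).

```python
import numpy as np

def mean_pairwise_correlation(softmax_outputs):
    """Average Pearson correlation of flattened softmax outputs over all snapshot pairs."""
    flat = [p.ravel() for p in softmax_outputs]
    n = len(flat)
    corrs = []
    for i in range(n):
        for j in range(i + 1, n):
            corrs.append(np.corrcoef(flat[i], flat[j])[0, 1])
    return float(np.mean(corrs))

# Toy usage: random "predictions" from 3 snapshots on 100 examples with 10 classes
rng = np.random.default_rng(0)
fake = [rng.dirichlet(np.ones(10), size=100) for _ in range(3)]
print(mean_pairwise_correlation(fake))
```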
1704.00109#31 | Snapshot Ensembles: Train 1, get M for free | # 5 DISCUSSION We introduce Snapshot Ensembling, a simple method to obtain ensembles of neural networks without any additional training cost. Our method exploits the ability of SGD to converge to and escape from local minima as the learning rate is lowered, which allows the model to visit several weight assignments that lead to increasingly accurate predictions over the course of training. We harness this power with the cyclical learning rate schedule proposed by Loshchilov & Hutter (2016), saving model snapshots at each point of convergence. We show in several experiments that all snapshots are accurate, yet produce different predictions from one another, and therefore are well suited for test-time ensembles. Ensembles of these snapshots significantly improve the state-of-the-art on CIFAR-10, CIFAR-100 and SVHN. Future work will explore combining Snapshot Ensembles with traditional ensembles. In particular, we will investigate how to balance growing an ensemble with new models (with random initializations) and refining existing models with further training cycles under a fixed training budget. | 1704.00109#30 | 1704.00109#32 | 1704.00109 | [
"1503.02531"
]
|
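For concreteness, here is a hedged sketch of the test-time procedure implied above: average the softmax outputs of the last m snapshots. The `predict_softmax` callable is a placeholder for whatever inference routine the chosen framework provides; this is not the paper's released code.

```python
import numpy as np

def snapshot_ensemble_predict(snapshots, x, predict_softmax, m=None):
    """Average softmax outputs of the last m snapshots (all of them if m is None)."""
    members = snapshots if m is None else snapshots[-m:]
    probs = [predict_softmax(model, x) for model in members]
    return np.mean(probs, axis=0)

def ensemble_accuracy(snapshots, inputs, labels, predict_softmax, m=None):
    """Classification accuracy of the snapshot ensemble on a labeled batch."""
    probs = snapshot_ensemble_predict(snapshots, inputs, predict_softmax, m)
    return float(np.mean(np.argmax(probs, axis=1) == labels))
```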
1704.00109#32 | Snapshot Ensembles: Train 1, get M for free | # ACKNOWLEDGEMENTS We thank Ilya Loshchilov and Frank Hutter for their insightful comments on the cyclic cosine-shaped learning rate. The authors are supported in part by grants III-1618134, III-1526012 and IIS-1149882 from the National Science Foundation, US Army Research Office grant W911NF-14-1-0477, and the Bill and Melinda Gates Foundation. # REFERENCES Léon Bottou. | 1704.00109#31 | 1704.00109#33 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#33 | Snapshot Ensembles: Train 1, get M for free | Large-scale machine learning with stochastic gradient descent. In COMPSTAT, 2010. Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD, 2006. Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. Ensemble selection from libraries of models. In ICML, 2004. Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: | 1704.00109#32 | 1704.00109#34 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#34 | Snapshot Ensembles: Train 1, get M for free | A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011. Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, 2014. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011. Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML, 2013. Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014. | 1704.00109#33 | 1704.00109#35 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#35 | Snapshot Ensembles: Train 1, get M for free | Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:993-1001, 1990. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016b. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. In ECCV, 2016b. Sergey Ioffe and Christian Szegedy. | 1704.00109#34 | 1704.00109#36 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#36 | Snapshot Ensembles: Train 1, get M for free | Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007, 2014. Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110, 2016. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016. Diederik Kingma and Jimmy Ba. Adam: | 1704.00109#35 | 1704.00109#37 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#37 | Snapshot Ensembles: Train 1, get M for free | A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. Anders Krogh, Jesper Vedelsby, et al. Neural network ensembles, cross validation, and active learning. In NIPS, volume 7, 1995. David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. | 1704.00109#36 | 1704.00109#38 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#38 | Snapshot Ensembles: Train 1, get M for free | Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016. Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016. Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016. | 1704.00109#37 | 1704.00109#39 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#39 | Snapshot Ensembles: Train 1, get M for free | Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, 2015. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013. Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. | 1704.00109#38 | 1704.00109#40 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#40 | Snapshot Ensembles: Train 1, get M for free | Mohammad Moghimi, Mohammad Saberian, Jian Yang, Li-Jia Li, Nuno Vasconcelos, and Serge Belongie. Boosted convolutional neural networks. 2016. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. | 1704.00109#39 | 1704.00109#41 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#41 | Snapshot Ensembles: Train 1, get M for free | Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014. Rico Sennrich, Barry Haddow, and Alexandra Birch. Edinburgh neural machine translation systems for WMT 16. arXiv preprint arXiv:1606.02891, 2016. Pierre Sermanet, Soumith Chintala, and Yann LeCun. Convolutional neural networks applied to house numbers digit classifi | 1704.00109#40 | 1704.00109#42 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#42 | Snapshot Ensembles: Train 1, get M for free | cation. In ICPR, 2012. Saurabh Singh, Derek Hoiem, and David Forsyth. Swapout: Learning an ensemble of deep architectures. arXiv preprint arXiv:1605.06465, 2016. Leslie N. Smith. No more pesky learning rate guessing games. CoRR, abs/1506.01186, 2016. URL http://arxiv.org/abs/1506.01186. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfi | 1704.00109#41 | 1704.00109#43 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#43 | Snapshot Ensembles: Train 1, get M for free | tting. Journal of Machine Learning Research, 15(1):1929-1958, 2014. Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015. A Swann and N Allinson. Fast committee learning: Preliminary results. Electronics Letters, 34(14):1408-1410, 1998. Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. | 1704.00109#42 | 1704.00109#44 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#44 | Snapshot Ensembles: Train 1, get M for free | Regularization of neural networks using dropconnect. In ICML, 2013. Jingjing Xie, Bing Xu, and Zhang Chuang. Horizontal and vertical ensemble with deep representation for classification. arXiv preprint arXiv:1306.2759, 2013. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. | 1704.00109#43 | 1704.00109#45 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#45 | Snapshot Ensembles: Train 1, get M for free | # SUPPLEMENTARY # A. Single model and Snapshot Ensemble performance over time In Figures 7-9, we compare the test error of Snapshot Ensembles with the error of individual model snapshots. The blue curve shows the test error of a single model snapshot using a cyclic cosine learning rate. The green curve shows the test error when ensembling model snapshots over time. (Note that, unlike Figure 3, we construct these ensembles beginning with the earliest snapshots.) As a reference, the red dashed line in each panel represents the test error of a single model trained for 300 epochs using a standard learning rate schedule. Without Snapshot Ensembles, in about half of the cases, the test error of the final model using a cyclic learning rate (the rightmost point in the blue curve) is no better than using a standard learning rate schedule. One can observe that under almost all settings, complete Snapshot Ensembles (the rightmost points of the green curves) outperform the single model baselines. In many cases, ensembles of just 2 or 3 model snapshots are able to match the performance of the single model trained with a standard learning rate. Not surprisingly, the ensembles of model snapshots consistently outperform any of their members, yielding a smooth curve of test error over time. [Figure 7 panels (begin): test error (%) vs. number of snapshots for ResNet-110 on C10 and C100, at α0 = 0.1 and α0 = 0.2.] | 1704.00109#44 | 1704.00109#46 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#46 | Snapshot Ensembles: Train 1, get M for free | [Figure 7 panels (continued): test error (%) vs. number of snapshots for ResNet-110 on SVHN and Tiny ImageNet and for Wide-ResNet-32 on C10, at α0 = 0.1 and α0 = 0.2; legend: single model snapshot, Snapshot Ensemble, single model with standard learning rate.] # Figure 7: Single model and Snapshot Ensemble performance over time (part 1). | 1704.00109#45 | 1704.00109#47 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#47 | Snapshot Ensembles: Train 1, get M for free | [Figure 8 panels: test error (%) vs. number of snapshots for Wide-ResNet-32 on C100, SVHN and Tiny ImageNet and for DenseNet-40 on C10 and C100, at α0 = 0.1 and α0 = 0.2; legend: single model snapshot, Snapshot Ensemble, single model with standard learning rate.] # Figure 8: Single model and Snapshot Ensemble performance over time (part 2). | 1704.00109#46 | 1704.00109#48 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#48 | Snapshot Ensembles: Train 1, get M for free | [Figure 9 panels: test error (%) vs. number of snapshots for DenseNet-40 on SVHN and Tiny ImageNet and for DenseNet-100 on C10 and C100, at α0 = 0.1 and α0 = 0.2; legend: single model snapshot, Snapshot Ensemble, single model with standard learning rate.] Figure 9: Single model and Snapshot Ensemble performance over time (part 3). | 1704.00109#47 | 1704.00109#49 | 1704.00109 | [
"1503.02531"
]
|
1704.00109#49 | Snapshot Ensembles: Train 1, get M for free | 14 | 1704.00109#48 | 1704.00109 | [
"1503.02531"
]
|
|
1704.00051#0 | Reading Wikipedia to Answer Open-Domain Questions | arXiv:1704.00051v2 [cs.CL] 28 Apr 2017 # Reading Wikipedia to Answer Open-Domain Questions # Danqi Chen* Computer Science Stanford University Stanford, CA 94305, USA [email protected] Adam Fisch, Jason Weston & Antoine Bordes Facebook AI Research 770 Broadway New York, NY 10003, USA {afisch,jase,abordes}@fb.com # Abstract This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task. # Introduction This paper considers the problem of answering factoid questions in an open-domain setting using Wikipedia as the unique knowledge source, such as one does when looking for answers in an encyclopedia. Wikipedia is a constantly evolving source of detailed information that could facilitate intelligent machines, | 1704.00051#1 | 1704.00051 | [
"1608.08614"
]
|
|
1704.00051#1 | Reading Wikipedia to Answer Open-Domain Questions | if they are able to leverage its power. Unlike knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) or DBPedia (Auer et al., 2007), which are easier for computers to process but too sparsely populated for open-domain question answering (Miller et al., 2016), Wikipedia contains up-to-date knowledge that humans are interested in. (* Most of this work was done while DC was with Facebook AI Research.) It is designed, however, for humans, not machines, to read. Using Wikipedia articles as the knowledge source causes the task of question answering (QA) to combine the challenges of both large-scale open-domain QA and of machine comprehension of text. In order to answer any question, one must first retrieve the few relevant articles among more than 5 million items, and then scan them carefully to identify the answer. We term this setting machine reading at scale (MRS). Our work treats Wikipedia as a collection of articles and does not rely on its internal graph structure. As a result, our approach is generic and could be switched to other collections of documents, books, or even daily updated newspapers. Large-scale QA systems like IBM's DeepQA (Ferrucci et al., 2010) rely on multiple sources to answer: besides Wikipedia, it is also paired with KBs, dictionaries, and even news articles, books, etc. As a result, such systems heavily rely on information redundancy among the sources to answer correctly. Having a single knowledge source forces the model to be very precise while searching for an answer as the evidence might appear only once. This challenge thus encourages research in the ability of a machine to read, a key motivation for the machine comprehension subfield and the creation of datasets such as SQuAD (Rajpurkar et al., 2016), CNN/Daily Mail (Hermann et al., 2015) and CBT (Hill et al., 2016). However, those machine comprehension resources typically assume that a short piece of relevant text is already identified and given to the model, which is not realistic for building an open-domain QA system. In sharp contrast, methods that use KBs or information retrieval over documents have to employ search as an integral part of the solution. | 1704.00051#0 | 1704.00051#2 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#2 | Reading Wikipedia to Answer Open-Domain Questions | Instead MRS is focused on simultaneously maintaining the challenge of machine comprehension, which requires the deep understanding of text, while keeping the realistic constraint of searching over a large open resource. In this paper, we show how multiple existing QA datasets can be used to evaluate MRS by requiring an open-domain system to perform well on all of them at once. We develop DrQA, a strong system for question answering from Wikipedia composed of: (1) Document Retriever, a module using bigram hashing and TF-IDF matching designed to, given a question, efficiently return a subset of relevant articles and (2) Document Reader, a multi-layer recurrent neural network machine comprehension model trained to detect answer spans in those few returned documents. Figure 1 gives an illustration of DrQA. Our experiments show that Document Retriever outperforms the built-in Wikipedia search engine and that Document Reader reaches state-of-the-art results on the very competitive SQuAD benchmark (Rajpurkar et al., 2016). Finally, our full system is evaluated using multiple benchmarks. In particular, we show that performance is improved across all datasets through the use of multitask learning and distant supervision compared to single task training. # 2 Related Work Open-domain QA was originally defined as finding answers in collections of unstructured documents, following the setting of the annual TREC competitions (http://trec.nist.gov/data/qamain.html). With the development of KBs, many recent innovations have occurred in the context of QA from KBs with the creation of resources like WebQuestions (Berant et al., 2013) and SimpleQuestions (Bordes et al., 2015) based on the Freebase KB (Bollacker et al., 2008), or on automatically extracted KBs, e.g., OpenIE triples and NELL (Fader et al., 2014). However, KBs have inherent limitations (incompleteness, fixed schemas) that motivated researchers to return to the original setting of answering from raw text. A second motivation to cast a fresh look at this problem is that of machine comprehension of text, i.e., answering questions after reading a short text or story. That subfield has made considerable progress recently thanks to new deep learning architectures like attention-based and memory- | 1704.00051#1 | 1704.00051#3 | 1704.00051 | [
"1608.08614"
]
|
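To make the two-stage pipeline described above concrete, here is a schematic sketch; the class and method names (`closest_docs`, `predict_span`) are illustrative placeholders, not DrQA's actual API.

```python
def answer_question(question, retriever, reader, top_k=5):
    """Two-stage open-domain QA: retrieve a few articles, then read them for a span.

    `retriever.closest_docs(question, k)` stands in for the Document Retriever and
    `reader.predict_span(question, paragraph)` for the Document Reader component.
    """
    articles = retriever.closest_docs(question, k=top_k)
    best_span, best_score = None, float("-inf")
    for article in articles:
        for paragraph in article.split("\n\n"):
            span, score = reader.predict_span(question, paragraph)
            if score > best_score:
                best_span, best_score = span, score
    return best_span
```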
1704.00051#3 | Reading Wikipedia to Answer Open-Domain Questions | augmented neural networks (Bahdanau et al., 2015; Weston et al., 2015; Graves et al., 2014) and the release of new training and evaluation datasets like QuizBowl (Iyyer et al., 2014), CNN/Daily Mail based on news articles (Hermann et al., 2015), CBT based on children's books (Hill et al., 2016), or SQuAD (Rajpurkar et al., 2016) and WikiReading (Hewlett et al., 2016), both based on Wikipedia. An objective of this paper is to test how such new methods can perform in an open-domain QA framework. QA using Wikipedia as a resource has been explored previously. Ryu et al. (2014) perform open-domain QA using a Wikipedia-based knowledge model. They combine article content with multiple other answer matching modules based on different types of semi-structured knowledge such as infoboxes, article structure, category structure, and definitions. Similarly, Ahn et al. (2004) also combine Wikipedia as a text resource with other resources, in this case with information retrieval over other documents. Buscaldi and Rosso (2006) also mine knowledge from Wikipedia for QA. Instead of using it as a resource for seeking answers to questions, they focus on validating answers returned by their QA system, and use Wikipedia categories for determining a set of patterns that should fit with the expected answer. In our work, we consider the comprehension of text only, and use Wikipedia text documents as the sole resource in order to emphasize the task of machine reading at scale, as described in the introduction. There are a number of highly developed full pipeline QA approaches using either the Web, as does QuASE (Sun et al., 2015), or Wikipedia as a resource, as do Microsoft's AskMSR (Brill et al., 2002), IBM's DeepQA (Ferrucci et al., 2010) and YodaQA (Baudiš, 2015; Baudiš and Šedivý, 2015), the latter of which is open source and hence reproducible for comparison purposes. AskMSR is a search-engine based QA system that relies on "data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers", i.e., it does not focus on machine comprehension, as we do. DeepQA is a very sophisticated system that relies on both unstructured information including text documents as well as structured data such as KBs, databases and ontologies to generate candidate answers or vote over evidence. YodaQA is an open source system modeled after DeepQA, similarly combining websites, | 1704.00051#2 | 1704.00051#4 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#4 | Reading Wikipedia to Answer Open-Domain Questions | information extraction, databases and Wikipedia in particular. Our comprehension task is made more challenging by only using a single resource. Comparing against these methods provides a useful datapoint for an "upper bound" benchmark on performance. Multitask learning (Caruana, 1998) and task transfer have a rich history in machine learning (e.g., using ImageNet in the computer vision community (Huh et al., 2016)), as well as in NLP in particular (Collobert and Weston, 2008). Several works have attempted to combine multiple QA training datasets via multitask learning to (i) achieve improvement across the datasets via task transfer; and (ii) provide a single general system capable of asking different kinds of questions due to the inevitably different data distributions across the source datasets. Fader et al. (2014) used WebQuestions, TREC and WikiAnswers with four KBs as knowledge sources and reported improvement on the latter two datasets through multitask learning. Bordes et al. (2015) combined WebQuestions and SimpleQuestions using distant supervision with Freebase as the KB to give slight improvements on both datasets, although poor performance was reported when training on only one dataset and testing on the other, showing that task transfer is indeed a challenging subject; see also (Kadlec et al., 2016) for a similar conclusion. Our work follows similar themes, but in the setting of having to retrieve and then read text documents, rather than using a KB, with positive results. # 3 Our System: DrQA In the following we describe our system DrQA for MRS which consists of two components: (1) the Document Retriever module for finding relevant articles and (2) a machine comprehension model, Document Reader, for extracting answers from a single document or a small collection of documents. # 3.1 Document Retriever Following classical QA systems, we use an efficient (non-machine learning) document retrieval system to fi | 1704.00051#3 | 1704.00051#5 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#5 | Reading Wikipedia to Answer Open-Domain Questions | rst narrow our search space and focus on reading only articles that are likely to be relevant. A simple inverted index lookup followed by term vector model scoring performs quite well on this task for many question types, compared to the built-in ElasticSearch based Wikipedia Search API (Gormley and Tong, 2015). Articles and questions are compared as TF-IDF weighted bag-of-word vectors. We further improve our system by taking local word order into account with n-gram features. Our best performing system uses bigram counts while preserving speed and memory efficiency by using the hashing of (Weinberger et al., 2009) to map the bigrams to 2^24 bins with an unsigned murmur3 hash. We use Document Retriever as the first part of our full model, by setting it to return 5 Wikipedia articles given any question. Those articles are then processed by Document Reader. # 3.2 Document Reader Our Document Reader model is inspired by the recent success of neural network models on machine comprehension tasks, in a similar spirit to the AttentiveReader described in (Hermann et al., 2015; Chen et al., 2016). Given a question q consisting of l tokens {q1, . . . , ql} and a document or a small set of documents of n paragraphs where a single paragraph p consists of m tokens {p1, . . . , pm}, we develop an RNN model that we apply to each paragraph in turn and then finally aggregate the predicted answers. Our method works as follows: Paragraph encoding We first represent all tokens pi in a paragraph p as a sequence of feature vectors p̃i ∈ R^d and pass them as the input to a recurrent neural network and thus obtain: {p1, . . . , pm} = RNN({p̃1, . . . , p̃m}), where pi is expected to encode useful context information around token pi. Specifically, we choose to use a multi-layer bidirectional long short-term memory network (LSTM), and take pi as the concatenation of each layer's hidden units in the end. The feature vector p̃i is comprised of the following parts: • Word embeddings: femb(pi) = E(pi). | 1704.00051#4 | 1704.00051#6 | 1704.00051 | [
"1608.08614"
]
|
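The retrieval scoring just described can be sketched as follows. This is a simplified illustration, not the DrQA implementation: Python's built-in hash stands in for the unsigned murmur3 hash, and the 2^24 bin count follows the text.

```python
import math
from collections import Counter

NUM_BINS = 2 ** 24

def hashed_ngrams(tokens):
    """Map unigrams and bigrams to hash bins (murmur3 in the paper; built-in hash here)."""
    grams = list(tokens) + [" ".join(b) for b in zip(tokens, tokens[1:])]
    return Counter(hash(g) % NUM_BINS for g in grams)

def tfidf_vector(tokens, doc_freq, num_docs):
    """Sparse TF-IDF vector over hashed n-gram bins; doc_freq maps bin -> document frequency."""
    counts = hashed_ngrams(tokens)
    return {b: tf * math.log((num_docs + 1) / (doc_freq.get(b, 0) + 1))
            for b, tf in counts.items()}

def retrieval_score(question_vec, article_vec):
    """Dot product of the two sparse TF-IDF vectors; higher means more relevant."""
    return sum(w * article_vec.get(b, 0.0) for b, w in question_vec.items())
```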
1704.00051#6 | Reading Wikipedia to Answer Open-Domain Questions | rst narrow our search space and focus on reading only articles that are likely to be rel- evant. A simple inverted index lookup followed by term vector model scoring performs quite well on this task for many question types, compared to the built-in ElasticSearch based Wikipedia Search API (Gormley and Tong, 2015). Articles and ques- tions are compared as TF-IDF weighted bag-of- word vectors. We further improve our system by taking local word order into account with n-gram features. Our best performing system uses bigram counts while preserving speed and memory efï¬ - ciency by using the hashing of (Weinberger et al., 2009) to map the bigrams to 224 bins with an un- signed murmur3 hash. We use Document Retriever as the ï¬ rst part of our full model, by setting it to return 5 Wikipedia articles given any question. Those articles are then processed by Document Reader. # 3.2 Document Reader Our Document Reader model is inspired by the re- cent success of neural network models on machine comprehension tasks, in a similar spirit to the At- tentiveReader described in (Hermann et al., 2015; Chen et al., 2016). tokens {q1, . . . , ql} and a document or a small set of doc- uments of n paragraphs where a single paragraph p consists of m tokens {p1, . . . , pm}, we develop an RNN model that we apply to each paragraph in turn and then ï¬ nally aggregate the predicted an- swers. Our method works as follows: Paragraph encoding We ï¬ rst represent all to- kens pi in a paragraph p as a sequence of feature vectors Ë pi â Rd and pass them as the input to a recurrent neural network and thus obtain: {p1, . . . , pm} = RNN({Ë p1, . . . , Ë pm}), where pi is expected to encode useful context information around token pi. Speciï¬ cally, we choose to use a multi-layer bidirectional long short-term memory network (LSTM), and take pi as the concatenation of each layerâ s hidden units in the end. The feature vector Ë pi is comprised of the fol- lowing parts: â ¢ Word embeddings: femb(pi) = E(pi). | 1704.00051#5 | 1704.00051#7 | 1704.00051 | [
"1608.08614"
]
|
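Assuming the aligned question embedding is the last feature discussed above, the following NumPy sketch (ours, not the released code) implements f_align(p_i) = sum_j a_{i,j} E(q_j), with attention scores a_{i,j} obtained from dot products of ReLU-projected word embeddings as in the formula given in the text.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def aligned_question_embedding(E_p, E_q, W):
    """Aligned question embeddings for each paragraph token.

    E_p: (m, d) paragraph word embeddings, E_q: (l, d) question word embeddings,
    W: (d, h) weights of the shared single dense layer alpha(.) with ReLU.
    """
    P = relu(E_p @ W)                    # (m, h) projected paragraph embeddings
    Q = relu(E_q @ W)                    # (l, h) projected question embeddings
    scores = P @ Q.T                     # (m, l) unnormalized attention scores
    scores -= scores.max(axis=1, keepdims=True)
    a = np.exp(scores)
    a /= a.sum(axis=1, keepdims=True)    # softmax over question words
    return a @ E_q                       # (m, d) aligned question embeddings

# Toy shapes: 7 paragraph tokens, 5 question tokens, 300-d embeddings, 128-d projection
rng = np.random.default_rng(0)
f_align = aligned_question_embedding(rng.normal(size=(7, 300)),
                                     rng.normal(size=(5, 300)),
                                     rng.normal(size=(300, 128)))
print(f_align.shape)  # (7, 300)
```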
1704.00051#7 | Reading Wikipedia to Answer Open-Domain Questions | We use the 300-dimensional Glove word em- beddings trained from 840B Web crawl data (Pennington et al., 2014). We keep most of the pre-trained word embeddings ï¬ xed and only ï¬ ne-tune the 1000 most frequent ques- tion words because the representations of some key words such as what, how, which, many could be crucial for QA systems. â ¢ Exact match: fexact match(pi) = I(pi â q). We use three simple binary features, indicat- ing whether pi can be exactly matched to one question word in q, either in its original, low- ercase or lemma form. These simple features turn out to be extremely helpful, as we will show in Section 5. â ¢ Token features: # ftoken(pi) = (POS(pi), NER(pi), TF(pi)). We also add a few manual features which re- ï¬ ect some properties of token pi in its con- text, which include its part-of-speech (POS) and named entity recognition (NER) tags and its (normalized) term frequency (TF). Aligned question embedding: Following (Lee et al., 2016) and other re- cent works, the last part we incorporate is an aligned question embedding fatign(pi) = D2; %,jE(qj), where the attention score a,j captures the similarity between p; and each question words q;. Specifically, a;,; is com- puted by the dot products between nonlinear mappings of word embeddings: us, exp (a(E(pi)) - o(E(q))) Dy exp (a(E(pi)) - a(E(q))) â and α(·) is a single dense layer with ReLU nonlinearity. Compared to the exact match features, these features add soft alignments between similar but non-identical words (e.g., car and vehicle). Question encoding The question encoding is simpler, as we only apply another recurrent neu- ral network on top of the word embeddings of q; and combine the resulting hidden units into one single vector: {qi,...,qi} â > g. We compute qa=>> j 0;4; where b; encodes the importance of each question word: b, = â oxp(w ay) 1 Dy exp(w ay) and w is a weight vector to learn. | 1704.00051#6 | 1704.00051#8 | 1704.00051 | [
"1608.08614"
]
|
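The span prediction step can likewise be sketched in a few lines (an illustration, not the DrQA code): bilinear start and end scores p_i W_s q and p_i W_e q, with the best span chosen under the constraint i <= i' <= i + 15 using unnormalized exponentials so that scores stay comparable across paragraphs.

```python
import numpy as np

def best_span(P, q, Ws, We, max_len=15):
    """Pick (start, end) maximizing exp(p_i Ws q) * exp(p_j We q) with i <= j <= i + max_len."""
    start_scores = np.exp(P @ Ws @ q)    # unnormalized, comparable across paragraphs
    end_scores = np.exp(P @ We @ q)
    best, best_score = (0, 0), -1.0
    for i in range(len(P)):
        for j in range(i, min(i + max_len, len(P) - 1) + 1):
            s = start_scores[i] * end_scores[j]
            if s > best_score:
                best, best_score = (i, j), s
    return best, float(best_score)

# Toy usage: 20 paragraph token vectors and a question vector of size 256
rng = np.random.default_rng(0)
P, q = rng.normal(size=(20, 256)), rng.normal(size=256)
Ws, We = rng.normal(size=(256, 256)) * 0.01, rng.normal(size=(256, 256)) * 0.01
print(best_span(P, q, Ws, We))
```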
1704.00051#8 | Reading Wikipedia to Answer Open-Domain Questions | Prediction At the paragraph level, the goal is to predict the span of tokens that is most likely the correct answer. We take the the paragraph vectors {p1, . . . , pm} and the question vector q as input, and simply train two classiï¬ ers independently for predicting the two ends of the span. Concretely, we use a bilinear term to capture the similarity be- tween pi and q and compute the probabilities of each token being start and end as: Pstart(i) â exp (piWsq) Pend(i) â exp (piWeq) During prediction, we choose the best span from token 7 to token 7â such that i < iâ ! < i +15 and Pstart(t) X Pena(iâ ) is maximized. To make scores compatible across paragraphs in one or several re- trieved documents, we use the unnormalized expo- nential and take argmax over all considered para- graph spans for our ï¬ nal prediction. # 4 Data | 1704.00051#7 | 1704.00051#9 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#9 | Reading Wikipedia to Answer Open-Domain Questions | Our work relies on three types of data: (1) Wikipedia that serves as our knowledge source for ï¬ nding answers, (2) the SQuAD dataset which is our main resource to train Document Reader and (3) three more QA datasets (CuratedTREC, We- bQuestions and WikiMovies) that in addition to SQuAD, are used to test the open-domain QA abil- ities of our full system, and to evaluate the ability of our model to learn from multitask learning and distant supervision. Statistics of the datasets are given in Table 2. 4.1 Wikipedia (Knowledge Source) We use the 2016-12-21 dump2 of English Wikipedia for all of our full-scale experiments as the knowledge source used to answer questions. For each page, only the plain text is extracted and all structured data sections such as lists and ï¬ g- ures are stripped.3 After discarding internal dis- ambiguation, list, index, and outline pages, we retain 5,075,182 articles consisting of 9,008,962 unique uncased token types. # 4.2 SQuAD The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) is a dataset for machine comprehension based on Wikipedia. The dataset contains 87k examples for training and 10k for development, with a large hidden test set which can only be accessed by the SQuAD creators. Each example is composed of a paragraph extracted from a Wikipedia article and an associated human-generated question. The answer is always a span from this paragraph and a model is given credit if its predicted answer matches it. Two evaluation metrics are used: exact string match (EM) and F1 score, which measures the weighted average of precision and recall at the token level. In the following, we use SQuAD for training and evaluating our Document Reader for the stan- dard machine comprehension task given the rel- | 1704.00051#8 | 1704.00051#10 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#10 | Reading Wikipedia to Answer Open-Domain Questions | 2https://dumps.wikimedia.org/enwiki/ latest 3We use the WikiExtractor script: https://github. com/attardi/wikiextractor. evant paragraph as deï¬ ned in (Rajpurkar et al., 2016). For the task of evaluating open-domain question answering over Wikipedia, we use the SQuAD development set QA pairs only, and we ask systems to uncover the correct answer spans without having access to the associated para- graphs. That is, a model is required to answer a question given the whole of Wikipedia as a re- source; it is not given the relevant paragraph as in the standard SQuAD setting. # 4.3 Open-domain QA Evaluation Resources SQuAD is one of the largest general purpose QA datasets currently available. SQuAD questions have been collected via a process involving show- ing a paragraph to each human annotator and ask- ing them to write a question. | 1704.00051#9 | 1704.00051#11 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#11 | Reading Wikipedia to Answer Open-Domain Questions | As a result, their distribution is quite speciï¬ c. We hence propose to train and evaluate our system on other datasets de- veloped for open-domain QA that have been con- structed in different ways (not necessarily in the context of answering from Wikipedia). CuratedTREC This dataset is based on the benchmarks from the TREC QA tasks that have been curated by BaudiË s and Ë Sediv`y (2015). We use the large version, which contains a total of 2,180 questions extracted from the datasets from TREC 1999, 2000, 2001 and 2002.4 WebQuestions Introduced in (Berant et al., 2013), this dataset is built to answer questions from the Freebase KB. It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechani- cal Turk. We convert each answer to text by us- ing entity names so that the dataset does not refer- ence Freebase IDs and is purely made of plain text question-answer pairs. WikiMovies This dataset, introduced in (Miller et al., 2016), contains 96k question-answer pairs in the domain of movies. Originally created from the OMDb and MovieLens databases, the examples are built such that they can also be answered by us- ing a subset of Wikipedia as the knowledge source (the title and the ï¬ rst section of articles from the movie domain). 4This dataset is available at https://github.com/ brmson/dataset-factoid-curated. Dataset SQuAD Example Q: | 1704.00051#10 | 1704.00051#12 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#12 | Reading Wikipedia to Answer Open-Domain Questions | How many provinces did the Ottoman empire contain in the 17th century? A: 32 CuratedTREC Q: What U.S. stateâ s motto is â Live free or Dieâ ? A: New Hampshire WebQuestions Q: What part of the atom did Chadwick discover?â A: neutron WikiMovies Q: Who wrote the ï¬ lm Gigli? A: Martin Brest Article / Paragraph Article: Ottoman Empire Paragraph: ... At the beginning of the 17th century the em- pire contained 32 provinces and numerous vassal states. Some of these were later absorbed into the Ottoman Empire, while others were granted various types of autonomy during the course of centuries. | 1704.00051#11 | 1704.00051#13 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#13 | Reading Wikipedia to Answer Open-Domain Questions | Article: Live Free or Die Paragraph: â Live Free or Dieâ is the ofï¬ cial motto of the U.S. state of New Hampshire, adopted by the state in 1945. It is possibly the best-known of all state mottos, partly because it conveys an assertive independence historically found in Amer- ican political philosophy and partly because of its contrast to the milder sentiments found in other state mottos. Article: Atom Paragraph: ... The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explana- tion for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the pro- ton, by the physicist James Chadwick in 1932. ... | 1704.00051#12 | 1704.00051#14 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#14 | Reading Wikipedia to Answer Open-Domain Questions | Article: Gigli Paragraph: Gigli is a 2003 American romantic comedy ï¬ lm written and directed by Martin Brest and starring Ben Afï¬ eck, Jennifer Lopez, Justin Bartha, Al Pacino, Christopher Walken, and Lainie Kazan. Table 1: Example training data from each QA dataset. In each case we show an associated paragraph where distant supervision (DS) correctly identiï¬ ed the answer within it, which is highlighted. Dataset SQuAD CuratedTREC WebQuestions WikiMovies Train Test Plain DS 87,599 71,231 10,570â 1,486â 3,464 694 3,778â 4,602 2,032 96,185â 36,301 9,952 Dataset SQuAD CuratedTREC WebQuestions WikiMovies Wiki Search 62.7 81.0 73.7 61.7 Doc. Retriever plain +bigrams 76.1 85.2 75.5 54.4 77.8 86.0 74.4 70.3 Table 2: Number of questions for each dataset used in this paper. DS: distantly supervised train- ing data. â : These training sets are not used as is because no paragraph is associated with each question. â : Corresponds to SQuAD development set. # 4.4 Distantly Supervised Data All the QA datasets presented above contain train- ing portions, but CuratedTREC, WebQuestions and WikiMovies only contain question-answer pairs, and not an associated document or para- graph as in SQuAD, and hence cannot be used for training Document Reader directly. Follow- ing previous work on distant supervision (DS) for relation extraction (Mintz et al., 2009), we use a procedure to automatically associate paragraphs to such training examples, and then add these exam- ples to our training set. We use the following process for each question- answer pair to build our training set. First, we Table 3: Document retrieval results. % of ques- tions for which the answer segment appears in one of the top 5 pages returned by the method. run Document Retriever on the question to re- trieve the top 5 Wikipedia articles. All paragraphs from those articles without an exact match of the known answer are directly discarded. | 1704.00051#13 | 1704.00051#15 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#15 | Reading Wikipedia to Answer Open-Domain Questions | All para- graphs shorter than 25 or longer than 1500 charac- ters are also ï¬ ltered out. If any named entities are detected in the question, we remove any paragraph that does not contain them at all. For every remain- ing paragraph in each retrieved page, we score all positions that match an answer using unigram and bigram overlap between the question and a 20 to- ken window, keeping up to the top 5 paragraphs with the highest overlaps. If there is no paragraph with non-zero overlap, the example is discarded; otherwise we add each found pair to our DS train- ing dataset. Some examples are shown in Table 1 and data statistics are given in Table 2. Note that we can also generate additional DS data for SQuAD by trying to ï¬ nd mentions of the answers not just in the paragraph provided, but also from other pages or the same page that the given paragraph was in. We observe that around half of the DS examples come from pages outside of the articles used in SQuAD. # 5 Experiments This section ï¬ rst presents evaluations of our Doc- ument Retriever and Document Reader modules separately, and then describes tests of their com- bination, DrQA, for open-domain QA on the full Wikipedia. # 5.1 Finding Relevant Articles | 1704.00051#14 | 1704.00051#16 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#16 | Reading Wikipedia to Answer Open-Domain Questions | We ï¬ rst examine the performance of our Docu- ment Retriever module on all the QA datasets. Ta- ble 3 compares the performance of the two ap- proaches described in Section 3.1 with that of the Wikipedia Search Engine5 for the task of ï¬ nd- ing articles that contain the answer given a ques- tion. Speciï¬ cally, we compute the ratio of ques- tions for which the text span of any of their as- sociated answers appear in at least one the top 5 relevant pages returned by each system. Results on all datasets indicate that our simple approach outperforms Wikipedia Search, especially with bi- gram hashing. We also compare doing retrieval with Okapi BM25 or by using cosine distance in the word embeddings space (by encoding ques- tions and articles as bag-of-embeddings), both of which we ï¬ nd performed worse. # 5.2 Reader Evaluation on SQuAD Next we evaluate our Document Reader com- ponent on the standard SQuAD evaluation (Ra- jpurkar et al., 2016). Implementation details We use 3-layer bidirec- tional LSTMs with h = 128 hidden units for both paragraph and question encoding. We apply the Stanford CoreNLP toolkit (Manning et al., 2014) for tokenization and also generating lemma, part- of-speech, and named entity tags. Lastly, all the training examples are sorted by the length of paragraph and divided into mini- batches of 32 examples each. We use Adamax for optimization as described in (Kingma and Ba, | 1704.00051#15 | 1704.00051#17 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#17 | Reading Wikipedia to Answer Open-Domain Questions | 5We use the Wikipedia Search API https://www. mediawiki.org/wiki/API:Search. 2014). Dropout with p = 0.3 is applied to word embeddings and all the hidden units of LSTMs. Result and analysis Table 4 presents our eval- uation results on both development and test sets. SQuAD has been a very competitive machine comprehension benchmark since its creation and we only list the best-performing systems in the ta- ble. Our system (single model) can achieve 70.0% exact match and 79.0% F1 scores on the test set, which surpasses all the published results and can match the top performance on the SQuAD leader- board at the time of writing. Additionally, we think that our model is conceptually simpler than most of the existing systems. We conducted an ablation analysis on the feature vector of para- graph tokens. As shown in Table 5 all the features contribute to the performance of our ï¬ | 1704.00051#16 | 1704.00051#18 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#18 | Reading Wikipedia to Answer Open-Domain Questions | nal system. Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over 77%. More interestingly, if we remove both faligned and fexact match, the performance drops dramatically, so we conclude that both features play a similar but complementary role in the feature representa- tion related to the paraphrased nature of a question vs. the context around an answer. # 5.3 Full Wikipedia Question Answering Finally, we assess the performance of our full sys- tem DrQA for answering open-domain questions using the four datasets introduced in Section 4. We compare three versions of DrQA which eval- uate the impact of using distant supervision and multitask learning across the training sources pro- vided to Document Reader (Document Retriever remains the same for each case): | 1704.00051#17 | 1704.00051#19 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#19 | Reading Wikipedia to Answer Open-Domain Questions | â ¢ SQuAD: A single Document Reader model is trained on the SQuAD training set only and used on all evaluation sets. â ¢ Fine-tune (DS): A Document Reader model is pre-trained on SQuAD and then ï¬ ne-tuned for each dataset independently using its dis- tant supervision (DS) training set. â ¢ Multitask (DS): A single Document Reader model is jointly trained on the SQuAD train- ing set and all the DS sources. For the full Wikipedia setting we use a stream- lined model that does not use the CoreNLP parsed ftoken features or lemmas for fexact match. | 1704.00051#18 | 1704.00051#20 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#20 | Reading Wikipedia to Answer Open-Domain Questions | We Method Dynamic Coattention Networks (Xiong et al., 2016) Multi-Perspective Matching (Wang et al., 2016)â BiDAF (Seo et al., 2016) R-netâ DrQA (Our model, Document Reader Only) Dev EM F1 65.4 75.6 66.1 75.8 67.7 77.3 n/a n/a 69.5 78.8 Test EM F1 66.2 75.9 65.5 75.1 68.0 77.3 71.3 79.7 70.0 79.0 Table 4: Evaluation results on the SQuAD dataset (single model only). â : Test results reï¬ ect the SQuAD leaderboard (https://stanford-qa.com) as of Feb 6, 2017. Features Full No ftoken No fexact match No faligned No faligned and fexact match F1 78.8 78.0 (-0.8) 77.3 (-1.5) 77.3 (-1.5) 59.4 (-19.4) Table 5: Feature ablation analysis of the paragraph representations of our Document Reader. Results are reported on the SQuAD development set. | 1704.00051#19 | 1704.00051#21 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#21 | Reading Wikipedia to Answer Open-Domain Questions | ï¬ nd that while these help for more exact paragraph reading in SQuAD, they donâ t improve results in the full setting. Additionally, WebQuestions and WikiMovies provide a list of candidate answers (e.g., 1.6 million Freebase entity strings for We- bQuestions) and we restrict the answer span must be in this list during prediction. Results Table 6 presents the results. Despite the difï¬ culty of the task compared to machine com- prehension (where you are given the right para- graph) and unconstrained QA (using redundant re- sources), DrQA still provides reasonable perfor- mance across all four datasets. We compare to an unconstrained QA system us- ing redundant resources (not just Wikipedia), Yo- daQA (BaudiË s, 2015), giving results which were previously reported on CuratedTREC and We- bQuestions. Despite the increased difï¬ culty of our task, it is reassuring that our performance is not too far behind on CuratedTREC (31.3 vs. 25.4). The gap is slightly bigger on WebQuestions, likely because this dataset was created from the speciï¬ c structure of Freebase which YodaQA uses directly. DrQAâ s performance on SQuAD compared to its Document Reader component on machine com- prehension in Table 4 shows a large drop (from 69.5 to 27.1) as we now are given Wikipedia to read, not a single paragraph. Given the correct document (but not the paragraph) we can achieve 49.4, indicating many false positives come from highly topical sentences. This is despite the fact that the Document Retriever works relatively well (77.8% of the time retrieving the answer, see Ta- ble 3). It is worth noting that a large part of the drop comes from the nature of the SQuAD ques- tions. They were written with a speciï¬ c para- graph in mind, thus their language can be ambigu- ous when the context is removed. Additional re- sources other than SQuAD, speciï¬ cally designed for MRS, might be needed to go further. We are interested in a single, full system that can answer any question using Wikipedia. | 1704.00051#20 | 1704.00051#22 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#22 | Reading Wikipedia to Answer Open-Domain Questions | The single model trained only on SQuAD is outper- formed on all four of the datasets by the multitask model that uses distant supervision. However per- formance when training on SQuAD alone is not far behind, indicating that task transfer is occurring. The majority of the improvement from SQuAD to Multitask (DS) however is likely not from task transfer as ï¬ ne-tuning on each dataset alone using DS also gives improvements, showing that is is the introduction of extra data in the same domain that helps. Nevertheless, the best single model that we can ï¬ nd is our overall goal, and that is the Multi- task (DS) system. # 6 Conclusion We studied the task of machine reading at scale, by using Wikipedia as the unique knowledge source for open-domain QA. Our results indicate that MRS is a key challenging task for researchers to focus on. Machine comprehension systems alone cannot solve the overall task. Our method integrates search, distant supervision, and mul- titask learning to provide an effective complete system. Evaluating the individual components as well as the full system across multiple benchmarks showed the efï¬ cacy of our approach. Dataset YodaQA SQuAD (All Wikipedia) CuratedTREC WebQuestions WikiMovies n/a 31.3 39.8 n/a 27.1 19.7 11.8 24.5 28.4 25.7 19.5 34.3 29.8 25.4 20.7 36.5 Table 6: Full Wikipedia results. Top-1 exact-match accuracy (in %, using SQuAD eval script). +Fine- tune (DS): Document Reader models trained on SQuAD and ï¬ ne-tuned on each DS training set inde- pendently. +Multitask (DS): Document Reader single model trained on SQuAD and all the distant su- pervision (DS) training sets jointly. YodaQA results are extracted from https://github.com/brmson/ yodaqa/wiki/Benchmarks and use additional resources such as Freebase and DBpedia, see Section 2. Future work should aim to improve over our DrQA system. | 1704.00051#21 | 1704.00051#23 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#23 | Reading Wikipedia to Answer Open-Domain Questions | Two obvious angles of attack are: (i) incorporate the fact that Document Reader ag- gregates over multiple paragraphs and documents directly in the training, as it currently trains on paragraphs independently; and (ii) perform end- to-end training across the Document Retriever and Document Reader pipeline, rather than indepen- dent systems. from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). pages 1533â 1544. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. AcM, pages 1247â 1250. # Acknowledgments The authors thank Pranav Rajpurkar for testing Document Reader on the test set of SQuAD. # References Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 . Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the AskMSR question-answering sys- In Empirical Methods in Natural Language tem. Processing (EMNLP). pages 257â 264. David Ahn, Valentin Jijkoun, Gilad Mishne, Karin Mller, Maarten de Rijke, and Stefan Schlobach. 2004. Using wikipedia at the trec qa track. In Pro- ceedings of TREC 2004. Davide Buscaldi and Paolo Rosso. 2006. | 1704.00051#22 | 1704.00051#24 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#24 | Reading Wikipedia to Answer Open-Domain Questions | Mining knowledge from Wikipedia for the question answer- ing task. In International Conference on Language Resources and Evaluation (LREC). pages 727â 730. S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, Springer, pages 722â 735. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations (ICLR). | 1704.00051#23 | 1704.00051#25 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#25 | Reading Wikipedia to Answer Open-Domain Questions | Petr BaudiË s. 2015. YodaQA: a modular question an- swering system pipeline. In POSTER 2015-19th In- ternational Student Conference on Electrical Engi- neering. pages 1156â 1165. Petr BaudiË s and Jan Ë Sediv`y. 2015. Modeling of the question answering task in the YodaQA sys- In International Conference of the Cross- tem. Language Evaluation Forum for European Lan- guages. Springer, pages 222â 228. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. | 1704.00051#24 | 1704.00051#26 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#26 | Reading Wikipedia to Answer Open-Domain Questions | Semantic parsing on freebase Rich Caruana. 1998. Multitask learning. In Learning to learn, Springer, pages 95â 133. Danqi Chen, Jason Bolton, and Christopher D Man- the In ning. 2016. A thorough examination of CNN/Daily Mail reading comprehension task. Association for Computational Linguistics (ACL). Ronan Collobert and Jason Weston. 2008. A uniï¬ ed architecture for natural language processing: deep neural networks with multitask learning. In Interna- tional Conference on Machine Learning (ICML). Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and In ACM SIGKDD in- extracted knowledge bases. ternational conference on Knowledge discovery and data mining. pages 1156â 1165. David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI magazine 31(3):59â 79. Clinton Gormley and Zachary Tong. 2015. | 1704.00051#25 | 1704.00051#27 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#27 | Reading Wikipedia to Answer Open-Domain Questions | Elastic- search: The Deï¬ nitive Guide. â Oâ Reilly Media, Inc.â . Alex Graves, Greg Wayne, and Ivo Danihelka. arXiv preprint 2014. Neural turing machines. arXiv:1410.5401 . Karl Moritz Hermann, Tom´aË s KoË cisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. | 1704.00051#26 | 1704.00051#28 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#28 | Reading Wikipedia to Answer Open-Domain Questions | Teaching ma- chines to read and comprehend. In Advances in Neu- ral Information Processing Systems (NIPS). Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over In Association for Computational Lin- wikipedia. guistics (ACL). pages 1535â 1545. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. | 1704.00051#27 | 1704.00051#29 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#29 | Reading Wikipedia to Answer Open-Domain Questions | The Goldilocks Principle: Reading childrenâ s books with explicit memory representa- tions. In International Conference on Learning Rep- resentations (ICLR). Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. 2016. What makes ImageNet good for transfer learning? arXiv preprint arXiv:1608.08614 . Jordan L Boyd-Graber, Leonardo Max Batista Claudino, Richard Socher, and Hal Daum´e III. 2014. A neural network for factoid ques- tion answering over paragraphs. In Empirical Meth- ods in Natural Language Processing (EMNLP). pages 633â 644. | 1704.00051#28 | 1704.00051#30 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#30 | Reading Wikipedia to Answer Open-Domain Questions | Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2016. From particular to general: A preliminary case study of transfer learning in reading compre- hension. Machine Intelligence Workshop, NIPS . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Di- panjan Das. 2016. Learning recurrent span repre- sentations for extractive question answering. arXiv preprint arXiv:1611.01436 . Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David Mc- Closky. 2014. The stanford corenlp natural lan- In Association for Com- guage processing toolkit. putational Linguistics (ACL). pages 55â 60. Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir- Hossein Karimi, Antoine Bordes, and Jason We- ston. 2016. | 1704.00051#29 | 1704.00051#31 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#31 | Reading Wikipedia to Answer Open-Domain Questions | Key-value memory networks for directly In Empirical Methods in Nat- reading documents. ural Language Processing (EMNLP). pages 1400â 1409. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation In Association extraction without labeled data. for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL/IJCNLP). pages 1003â 1011. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word In Empirical Methods in Natural representation. Language Processing (EMNLP). pages 1532â | 1704.00051#30 | 1704.00051#32 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#32 | Reading Wikipedia to Answer Open-Domain Questions | 1543. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Meth- ods in Natural Language Processing (EMNLP). Pum-Mo Ryu, Myung-Gil Jang, and Hyun-Ki Kim. 2014. Open domain question answering using Information Wikipedia-based knowledge model. Processing & Management 50(5):683â 692. | 1704.00051#31 | 1704.00051#33 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#33 | Reading Wikipedia to Answer Open-Domain Questions | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention ï¬ ow for machine comprehension. arXiv preprint arXiv:1611.01603 . Huan Sun, Hao Ma, Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang. 2015. Open do- main question answering via semantic enrichment. In Proceedings of the 24th International Conference on World Wide Web. ACM, pages 1045â 1055. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context match- arXiv preprint ing for machine comprehension. arXiv:1612.04211 . Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. 2009. Feature hashing for large scale multitask learning. In Inter- national Conference on Machine Learning (ICML). pages 1113â | 1704.00051#32 | 1704.00051#34 | 1704.00051 | [
"1608.08614"
]
|
1704.00051#34 | Reading Wikipedia to Answer Open-Domain Questions | 1120. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Confer- ence on Learning Representations (ICLR). Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . | 1704.00051#33 | 1704.00051 | [
"1608.08614"
]
|
|
1703.09844#0 | Multi-Scale Dense Networks for Resource Efficient Image Classification | 8 1 0 2 n u J 7 ] G L . s c [ 5 v 4 4 8 9 0 . 3 0 7 1 : v i X r a Published as a conference paper at ICLR 2018 MULTI-SCALE DENSE NETWORKS FOR RESOURCE EFFICIENT IMAGE CLASSIFICATION Gao Huang Cornell University Danlu Chen Fudan University Tianhong Li Tsinghua University Laurens van der Maaten Facebook AI Research Kilian Weinberger Cornell University # Felix Wu Cornell University # ABSTRACT In this paper we investigate image classiï¬ cation with computational resource lim- its at test time. Two such settings are: 1. anytime classiï¬ cation, where the net- workâ s prediction for a test example is progressively updated, facilitating the out- put of a prediction at any time; and 2. budgeted batch classiï¬ cation, where a ï¬ xed amount of computation is available to classify a set of examples that can be spent unevenly across â easierâ and â harderâ inputs. | 1703.09844#1 | 1703.09844 | [
"1702.07780"
]
|
|
1703.09844#1 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classiï¬ ers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classiï¬ ers, we incorporate them as early-exits into a single deep con- volutional neural network and inter-connect them with dense connectivity. To fa- cilitate high quality classiï¬ cation early on, we use a two-dimensional multi-scale network architecture that maintains coarse and ï¬ ne level features all-throughout the network. Experiments on three image-classiï¬ cation tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings. # INTRODUCTION Recent years have witnessed a surge in demand for applications of visual object recognition, for instance, in self-driving cars (Bojarski et al., 2016) and content-based image search (Wan et al., 2014). This demand has in part been fueled through the promise generated by the astonishing progress of convolutional networks (CNNs) on visual object recognition benchmark competition datasets, such as ILSVRC (Deng et al., 2009) and COCO (Lin et al., 2014), where state-of-the-art models may have even surpassed human-level performance (He et al., 2015; 2016). However, the requirements of such competitions differ from real- world applications, which tend to incentivize resource-hungry mod- els with high computational demands at inference time. For exam- ple, the COCO 2016 competition was won by a large ensemble of computationally intensive CNNs1 â a model likely far too compu- tationally expensive for any resource-aware application. Although much smaller models would also obtain decent error, very large, computationally intensive models seem necessary to correctly clas- sify the hard examples that make up the bulk of the remaining mis- classiï¬ cations of modern algorithms. To illustrate this point, Fig- ure 1 shows two images of horses. The left image depicts a horse in canonical pose and is easy to classify, whereas the right image is taken from a rare viewpoint and is likely in the tail of the data dis- tribution. | 1703.09844#0 | 1703.09844#2 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#2 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Computationally intensive models are needed to classify such tail examples correctly, but are wasteful when applied to canonical images such as the left one. Â¥ In real-world applications, computation directly translates into power consumption, which should be minimized for environmental and economical reasons, and is a scarce commodity on mobile 1http://image-net.org/challenges/talks/2016/GRMI-COCO-slidedeck.pdf 1 Published as a conference paper at ICLR 2018 devices. This begs the question: why do we choose between either wasting computational resources by applying an unnecessarily computationally expensive model to easy images, or making mistakes by using an efï¬ cient model that fails to recognize difï¬ cult images? Ideally, our systems should automatically use small networks when test images are easy or computational resources limited, and use big networks when test images are hard or computation is abundant. Such systems would be beneï¬ cial in at least two settings with computational constraints at test- time: anytime prediction, where the network can be forced to output a prediction at any given point in time; and budgeted batch classiï¬ cation, where a ï¬ xed computational budget is shared across a large set of examples which can be spent unevenly across â easyâ and â hardâ examples. A prac- tical use-case of anytime prediction is in mobile apps on Android devices: in 2015, there existed 24, 093 distinct Android devices2, each with its own distinct computational limitations. It is infea- sible to train a different network that processes video frame-by-frame at a ï¬ xed framerate for each of these devices. Instead, you would like to train a single network that maximizes accuracy on all these devices, within the computational constraints of that device. The budget batch classiï¬ cation setting is ubiquitous in large-scale machine learning applications. Search engines, social media companies, on-line advertising agencies, all must process large volumes of data on limited hardware resources. For example, as of 2010, Google Image Search had over 10 Billion images indexed3, which has likely grown to over 1 Trillion since. Even if a new model to process these images is only 1/10s slower per image, this additional cost would add 3170 years of CPU time. | 1703.09844#1 | 1703.09844#3 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#3 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In the budget batch classiï¬ cation setting, companies can improve the average accuracy by reducing the amount of computation spent on â easyâ cases to save up computation for â hardâ cases. Motivated by prior work in computer vision on resource-efï¬ cient recognition (Viola & Jones, 2001), we aim to develop CNNs that â sliceâ the computation and process these slices one-by-one, stopping the evaluation once the CPU time is depleted or the classiï¬ cation sufï¬ ciently certain (through â early exitsâ ). Unfortunately, the architecture of CNNs is inherently at odds with the introduction of early exits. CNNs learn the data representation and the classiï¬ er jointly, which leads to two problems with early exits: 1. The features in the last layer are extracted directly to be used by the classiï¬ er, whereas earlier features are not. The inherent dilemma is that different kinds of features need to be extracted depending on how many layers are left until the classiï¬ | 1703.09844#2 | 1703.09844#4 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#4 | Multi-Scale Dense Networks for Resource Efficient Image Classification | cation. 2. The features in different layers of the network may have different scale. Typically, the ï¬ rst layers of a deep nets operate on a ï¬ ne scale (to extract low-level features), whereas later layers transition (through pooling or strided convolution) to coarse scales that allow global context to enter the classiï¬ er. Both scales are needed but happen at different places in the network. We propose a novel network architecture that addresses both of these problems through careful design changes, allowing for resource-efï¬ cient image classiï¬ cation. Our network uses a cascade of intermediate classiï¬ ers throughout the network. The ï¬ rst problem, of classiï¬ ers altering the internal representation, is addressed through the introduction of dense connectivity (Huang et al., 2017). By connecting all layers to all classiï¬ ers, features are no longer dominated by the most imminent early- exit and the trade-off between early or later classiï¬ cation can be performed elegantly as part of the loss function. The second problem, the lack of coarse-scale features in early layers, is addressed by adopting a multi-scale network structure. At each layer we produce features of all scales (ï¬ ne-to- coarse), which facilitates good classiï¬ cation early on but also extracts low-level features that only become useful after several more layers of processing. Our network architecture is illustrated in Figure 2, and we refer to it as Multi-Scale DenseNet (MSDNet). We evaluate MSDNets on three image-classiï¬ cation datasets. In the anytime classiï¬ cation setting, we show that it is possible to provide the ability to output a prediction at any time while maintain high accuracies throughout. In the budget batch classiï¬ cation setting we show that MSDNets can be effectively used to adapt the amount of computation to the difï¬ culty of the example to be classiï¬ ed, which allows us to reduce the computational requirements of our models drastically whilst perform- ing on par with state-of-the-art CNNs in terms of overall classiï¬ cation accuracy. To our knowledge this is the ï¬ | 1703.09844#3 | 1703.09844#5 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#5 | Multi-Scale Dense Networks for Resource Efficient Image Classification | rst deep learning architecture of its kind that allows dynamic resource adaptation with a single model and obtains competitive results throughout. 2Source: https://opensignal.com/reports/2015/08/android-fragmentation/ 3https://en.wikipedia.org/wiki/Google_Images 2 Published as a conference paper at ICLR 2018 s fam) wee xt £0) AC) features classifier regular conv - 3 eae ONY ea? one h(-) + see layer concatenation strided conv identity Figure 2: Illustration of the ï¬ rst four layers of an MSDNet with three scales. The horizontal direction cor- responds to the layer direction (depth) of the network. The vertical direction corresponds to the scale of the feature maps. Horizontal arrows indicate a regular convolution operation, whereas diagonal and vertical arrows indicate a strided convolution operation. | 1703.09844#4 | 1703.09844#6 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#6 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Classiï¬ ers only operate on feature maps at the coarsest scale. Connec- tions across more than one layer are not drawn explicitly: they are implicit through recursive concatenations. # 2 RELATED WORK We brieï¬ y review related prior work on computation-efï¬ cient networks, memory-efï¬ cient networks, and resource-sensitive machine learning, from which our network architecture draws inspiration. Computation-efï¬ cient networks. Most prior work on (convolutional) networks that are computa- tionally efï¬ cient at test time focuses on reducing model size after training. In particular, many stud- ies propose to prune weights (LeCun et al., 1989; Hassibi et al., 1993; Li et al., 2017) or quantize weights (Hubara et al., 2016; Rastegari et al., 2016) during or after training. These approaches are generally effective because deep networks often have a substantial number of redundant weights that can be pruned or quantized without sacriï¬ cing (and sometimes even improving) performance. Prior work also studies approaches that directly learn compact models with less parameter redundancy. For example, the knowledge-distillation method (Bucilua et al., 2006; Hinton et al., 2014) trains small student networks to reproduce the output of a much larger teacher network or ensemble. Our work differs from those approaches in that we train a single model that trades off computation for accuracy at test time without any re-training or ï¬ netuning. Indeed, weight pruning and knowledge distillation can be used in combination with our approach, and may lead to further improvements. Resource-efï¬ cient machine learning. Various prior studies explore computationally efï¬ cient vari- ants of traditional machine-learning models (Viola & Jones, 2001; Grubb & Bagnell, 2012; Karayev et al., 2014; Trapeznikov & Saligrama, 2013; Xu et al., 2012; 2013; Nan et al., 2015; Wang et al., 2015). Most of these studies focus on how to incorporate the computational requirements of com- puting particular features in the training of machine-learning models such as (gradient-boosted) decision trees. | 1703.09844#5 | 1703.09844#7 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#7 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Whilst our study is certainly inspired by these results, the architecture we explore differs substantially: most prior work exploits characteristics of machine-learning models (such as decision trees) that do not apply to deep networks. Our work is possibly most closely related to recent work on FractalNets (Larsson et al., 2017), which can perform anytime prediction by pro- gressively evaluating subnetworks of the full network. FractalNets differ from our work in that they are not explicitly optimized for computation efï¬ ciency and consequently our experiments show that MSDNets substantially outperform FractalNets. Our dynamic evaluation strategy for reducing batch computational cost is closely related to the the adaptive computation time approach (Graves, 2016; Figurnov et al., 2016), and the recently proposed method of adaptively evaluating neural networks (Bolukbasi et al., 2017). Different from these works, our method adopts a specially designed net- work with multiple classiï¬ ers, which are jointly optimized during training and can directly output conï¬ dence scores to control the evaluation process for each test example. The adaptive computation time method (Graves, 2016) and its extension (Figurnov et al., 2016) also perform adaptive eval- uation on test examples to save batch computational cost, but focus on skipping units rather than layers. In (Odena et al., 2017), a â composerâ | 1703.09844#6 | 1703.09844#8 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#8 | Multi-Scale Dense Networks for Resource Efficient Image Classification | model is trained to construct the evaluation network from a set of sub-modules for each test example. By contrast, our work uses a single CNN with multiple intermediate classiï¬ ers that is trained end-to-end. The Feedback Networks (Zamir et al., 2016) enable early predictions by making predictions in a recurrent fashion, which heavily shares parameters among classiï¬ ers, but is less efï¬ cient in sharing computation. Related network architectures. Our network architecture borrows elements from neural fabrics (Saxena & Verbeek, 2016) and others (Zhou et al., 2015; Jacobsen et al., 2017; Ke et al., 2016) | 1703.09844#7 | 1703.09844#9 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#9 | Multi-Scale Dense Networks for Resource Efficient Image Classification | 3 Published as a conference paper at ICLR 2018 Relative accuracy of the intermediate classifier Relative accuracy of the final classifier Lo} <p he 1.00 ,O9F uc 4 0.98 4 Bos . â f z 0.87 ? â 4 4 L , = 0.96 4 â , é OTF 2 4 ry â fal Z 0.94 4 4 S 0.6} = + . 0.92 © MSDNet (with intermediate classifier) |7 ost H © DenseNet (with intermediate classifier) 0.90 @â ® ResNet (with intermediate classifier) [4 0.0 02 04 06 0.8 10 0.0 02 04 0.6 08 10 location of intermediate classifier (relative to full depth) location of intermediate classifier (relative to full depth) | 1703.09844#8 | 1703.09844#10 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#10 | Multi-Scale Dense Networks for Resource Efficient Image Classification | # A 5 a # oe 5 # S So Figure 3: Relative accuracy of the intermediate classiï¬ er (left) and the ï¬ nal classiï¬ er (right) when introducing a single intermediate classiï¬ er at different layers in a ResNet, DenseNet and MSDNet. All experiments were performed on the CIFAR-100 dataset. Higher is better. to rapidly construct a low-resolution feature map that is amenable to classiï¬ cation, whilst also maintaining feature maps of higher resolution that are essential for obtaining high classiï¬ cation accuracy. Our design differs from the neural fabrics (Saxena & Verbeek, 2016) substantially in that MSDNets have a reduced number of scales and no sparse channel connectivity or up-sampling paths. MSDNets are at least one order of magnitude more efï¬ cient and typically more accurate â for example, an MSDNet with less than 1 million parameters obtains a test error below 7.0% on CIFAR-10 (Krizhevsky & Hinton, 2009), whereas Saxena & Verbeek (2016) report 7.43% with over 20 million parameters. We use the same feature-concatenation approach as DenseNets (Huang et al., 2017), which allows us to bypass features optimized for early classiï¬ ers in later layers of the network. Our architecture is related to deeply supervised networks (Lee et al., 2015) in that it incorporates classiï¬ ers at multiple layers throughout the network. In contrast to all these prior architectures, our network is speciï¬ cally designed to operate in resource-aware settings. # 3 PROBLEM SETUP We consider two settings that impose computational constraints at prediction time. Anytime prediction. In the anytime prediction setting (Grubb & Bagnell, 2012), there is a ï¬ nite computational budget B > 0 available for each test example x. The computational budget is nonde- terministic, and varies per test instance. It is determined by the occurrence of an event that requires the model to output a prediction immediately. We assume that the budget is drawn from some joint distribution P (x, B). In some applications P (B) may be independent of P (x) and can be estimated. | 1703.09844#9 | 1703.09844#11 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#11 | Multi-Scale Dense Networks for Resource Efficient Image Classification | For example, if the event is governed by a Poisson process, P (B) is an exponential distribution. We denote the loss of a model f (x) that has to produce a prediction for instance x within budget B by L(f (x), B). The goal of an anytime learner is to minimize the expected loss under the budget dis- tribution: L(f ) = E [L(f (x), B)]P (x,B). Here, L( ) denotes a suitable loss function. As is common · in the empirical risk minimization framework, the expectation under P (x, B) may be estimated by an average over samples from P (x, B). Budgeted batch classiï¬ cation. classify a set of examples x1, . . . , xM } is known in advance. The learner aims to minimize the loss across all examples in cumulative cost bounded by B, which we denote by L(f ( ). It can potentially do so by spending less than B L( · whilst using more than B B considered here is a soft constraint when we have a large batch of testing samples. # 4 MULTI-SCALE DENSE CONVOLUTIONAL NETWORKS A straightforward solution to the two problems introduced in Section 3 is to train multiple networks of increasing capacity, and sequentially evaluate them at test time (as in Bolukbasi et al. (2017)). In the anytime setting the evaluation can be stopped at any point and the most recent prediction is returned. In the batch setting, the evaluation is stopped prematurely the moment a network classiï¬ | 1703.09844#10 | 1703.09844#12 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#12 | Multi-Scale Dense Networks for Resource Efficient Image Classification | es 4 Published as a conference paper at ICLR 2018 the test sample with sufï¬ cient conï¬ dence. When the resources are so limited that the execution is terminated after the ï¬ rst network, this approach is optimal because the ï¬ rst network is trained for exactly this computational budget without compromises. However, in both settings, this scenario is rare. In the more common scenario where some test samples can require more processing time than others the approach is far from optimal because previously learned features are never re-used across the different networks. An alternative solution is to build a deep network with a cascade of classiï¬ ers operating on the features of internal layers: in such a network features computed for an earlier classiï¬ er can be re-used by later classiï¬ ers. However, na¨ıvely attaching intermediate early-exit classiï¬ ers to a state- of-the-art deep network leads to poor performance. There are two reasons why intermediate early-exit classiï¬ ers hurt the performance of deep neural networks: early classiï¬ ers lack coarse-level features and classiï¬ ers throughout interfere with the feature generation process. In this section we investigate these effects empirically (see Figure 3) and, in response to our ï¬ ndings, propose the MSDNet architecture illustrated in Figure 2. Problem: The lack of coarse-level features. Traditional neural networks learn features of ï¬ ne scale in early layers and coarse scale in later layers (through repeated convolution, pooling, and strided convolution). Coarse scale features in the ï¬ nal layers are important to classify the content of the whole image into a single class. Early layers lack coarse-level features and early-exit clas- siï¬ ers attached to these layers will likely yield unsatisfactory high error rates. To illustrate this point, we attached4 intermediate classiï¬ ers to varying layers of a ResNet (He et al., 2016) and a DenseNet (Huang et al., 2017) on the CIFAR-100 dataset (Krizhevsky & Hinton, 2009). The blue and red dashed lines in the left plot of Figure 3 show the relative accuracies of these classiï¬ ers. All three plots gives rise to a clear trend: the accuracy of a classiï¬ er is highly correlated with its position within the network. | 1703.09844#11 | 1703.09844#13 | 1703.09844 | [
"1702.07780"
]
|
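The two objectives in the problem-setup text above were garbled by PDF extraction. A plausible LaTeX reconstruction, based on the surrounding definitions, is given below; the loss L, the budget distribution P(x, B), and the test set D_test are the paper's own symbols, while the per-example cost notation C(f, x) is introduced here only for illustration.

```latex
% Anytime prediction: expected loss under the joint budget distribution
L(f) \;=\; \mathbb{E}\big[\, L\big(f(x), B\big) \,\big]_{P(x,B)}

% Budgeted batch classification: total loss over the test set, with the
% cumulative evaluation cost constrained by the budget B
\min_f \;\; \sum_{x \in \mathcal{D}_{\mathrm{test}}} L\big(f(x)\big)
\qquad \text{s.t.} \qquad \sum_{x \in \mathcal{D}_{\mathrm{test}}} C\big(f, x\big) \;\le\; B
```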
1703.09844#13 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Particularly in the case of the ResNet (blue line), one can observe a visible â staircaseâ pattern, with big improvements after the 2nd and 4th classiï¬ ers â located right after pooling layers. Solution: Multi-scale feature maps. To address this issue, MSDNets maintain a feature repre- sentation at multiple scales throughout the network, and all the classiï¬ ers only use the coarse-level features. The feature maps at a particular layer5 and scale are computed by concatenating the re- sults of one or two convolutions: 1. the result of a regular convolution applied on the same-scale features from the previous layer (horizontal connections) and, if possible, 2. the result of a strided convolution applied on the ï¬ ner-scale feature map from the previous layer (diagonal connections). The horizontal connections preserve and progress high-resolution information, which facilitates the construction of high-quality coarse features in later layers. The vertical connections produce coarse features throughout that are amenable to classiï¬ | 1703.09844#12 | 1703.09844#14 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#14 | Multi-Scale Dense Networks for Resource Efficient Image Classification | cation. The dashed black line in Figure 3 shows that MSDNets substantially increase the accuracy of early classiï¬ ers. Problem: Early classiï¬ ers interfere with later classiï¬ ers. The right plot of Figure 3 shows the accuracies of the ï¬ nal classiï¬ er as a function of the location of a single intermediate classiï¬ er, relative to the accuracy of a network without intermediate classiï¬ ers. The results show that the introduction of an intermediate classiï¬ er harms the ï¬ nal ResNet classiï¬ er (blue line), reducing its accuracy by up to 7%. We postulate that this accuracy degradation in the ResNet may be caused by the intermediate classiï¬ er inï¬ uencing the early features to be optimized for the short-term and not for the ï¬ nal layers. This improves the accuracy of the immediate classiï¬ er but collapses information required to generate high quality features in later layers. This effect becomes more pronounced when the ï¬ rst classiï¬ er is attached to an earlier layer. Solution: Dense connectivity. By contrast, the DenseNet (red line) suffers much less from this effect. Dense connectivity (Huang et al., 2017) connects each layer with all subsequent layers and allows later layers to bypass features optimized for the short-term, to maintain the high accuracy of the ï¬ | 1703.09844#13 | 1703.09844#15 | 1703.09844 | [
"1702.07780"
]
|
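As a concrete illustration of the horizontal and diagonal connections described above, the sketch below builds one output scale of one MSDNet-style layer in PyTorch. This is not the authors' implementation: the module name, channel counts, and BatchNorm/ReLU placement are illustrative assumptions, and the dense connections to earlier layers are omitted for brevity.

```python
import torch
import torch.nn as nn

class MSDLayerUnit(nn.Module):
    """One output scale of one MSDNet-style layer (illustrative sketch).

    Combines a regular 3x3 convolution on same-scale features from the
    previous layer (horizontal connection) with a strided 3x3 convolution
    on finer-scale features (diagonal connection), then concatenates them.
    """
    def __init__(self, in_same, in_finer, growth):
        super().__init__()
        self.same = nn.Sequential(
            nn.Conv2d(in_same, growth, kernel_size=3, padding=1),
            nn.BatchNorm2d(growth),
            nn.ReLU(inplace=True))
        self.down = nn.Sequential(
            nn.Conv2d(in_finer, growth, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(growth),
            nn.ReLU(inplace=True))

    def forward(self, x_same, x_finer):
        # channel-wise concatenation of same-scale and down-sampled features
        return torch.cat([self.same(x_same), self.down(x_finer)], dim=1)

unit = MSDLayerUnit(in_same=16, in_finer=12, growth=8)
coarse = torch.randn(1, 16, 16, 16)   # same-scale features from the previous layer
fine = torch.randn(1, 12, 32, 32)     # finer-scale features from the previous layer
print(unit(coarse, fine).shape)       # torch.Size([1, 16, 16, 16])
```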
1703.09844#15 | Multi-Scale Dense Networks for Resource Efficient Image Classification | nal classiï¬ er. If an earlier layer collapses information to generate short-term features, the lost information can be recovered through the direct connection to its preceding layer. The ï¬ nal classiï¬ erâ s performance becomes (more or less) independent of the location of the intermediate 4We select six evenly spaced locations for each of the networks to introduce the intermediate classiï¬ er. Both the ResNet and DenseNet have three resolution blocks; each block offers two tentative locations for the intermediate classiï¬ er. The loss of the intermediate and ï¬ nal classiï¬ ers are equally weighted. 5Here, we use the term â layerâ to refer to a column in Figure 2. 5 | 1703.09844#14 | 1703.09844#16 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#16 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Published as a conference paper at ICLR 2018 [Figure 4 diagram: a table indicating, for layers ℓ = 1, ..., 4, which earlier feature maps are directly connected, indirectly connected, or not connected to the output.] Figure 4: The output x_ℓ^s of layer ℓ at the s-th scale in a MSDNet. Herein, [. . .] denotes the concatenation operator, h_ℓ^s(·) a regular convolution transformation, and h̃_ℓ^s(·) a strided convolution. Note that the outputs of h_ℓ^s and h̃_ℓ^s have the same feature map size; their outputs are concatenated along the channel dimension. | 1703.09844#15 | 1703.09844#17 | 1703.09844 | [
"1702.07780"
]
|
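The formulas tabulated in Figure 4 do not survive extraction. A plausible LaTeX reconstruction, based on the caption and on Section 4.1, is given below (h_ℓ^s denotes a regular convolution, h̃_ℓ^s a strided convolution, and [...] concatenation); the exact form should be checked against the original figure.

```latex
x_1^1 = h_1^1\!\left(x_0^1\right), \qquad
x_1^s = \tilde{h}_1^s\!\left(x_1^{s-1}\right) \quad (s > 1)

x_\ell^1 = h_\ell^1\!\left(\left[x_1^1, \ldots, x_{\ell-1}^1\right]\right) \quad (\ell > 1)

x_\ell^s = \left[\;\tilde{h}_\ell^s\!\left(\left[x_1^{s-1}, \ldots, x_{\ell-1}^{s-1}\right]\right),\;
                  h_\ell^s\!\left(\left[x_1^s, \ldots, x_{\ell-1}^s\right]\right)\right] \quad (\ell > 1,\ s > 1)
```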
1703.09844#17 | Multi-Scale Dense Networks for Resource Efficient Image Classification | classifier. As far as we know, this is the first paper that discovers that dense connectivity is an important element to early-exit classifiers in deep networks, and we make it an integral design choice in MSDNets. 4.1 THE MSDNET ARCHITECTURE The MSDNet architecture is illustrated in Figure 2. We present its main components below. Additional details on the architecture are presented in Appendix A. First layer. The first layer (ℓ = 1) is unique as it includes vertical connections in Figure 2. Its main purpose is to "seed" representations on all S scales. One could view its vertical layout as a miniature "S-layers" convolutional network (S = 3 in Figure 2). Let us denote the output feature maps at layer ℓ and scale s as x_ℓ^s and the original input image as x_0^1. Feature maps at coarser scales are obtained via down-sampling. The output x_1^s of the first layer is formally given in the top row of Figure 4. Subsequent layers. Following DenseNets (Huang et al., 2017), the output feature maps x_ℓ^s produced at subsequent layers, ℓ > 1, and scales, s, are a concatenation of transformed feature maps from all previous feature maps of scale s and s-1 (if s > 1). Formally, the ℓ-th layer of our network outputs a set of features at S scales {x_ℓ^1, . . . , x_ℓ^S}, given in the last row of Figure 4. Classifiers. The classifiers in MSDNets also follow the dense connectivity pattern within the coarsest scale, S, i.e., the classifier at layer ℓ uses all the features [x_1^S, . . . , x_ℓ^S]. Each classifier consists of two convolutional layers, followed by one average pooling layer and one linear layer. In practice, we only attach classifiers to some of the intermediate layers, and we let f_k(·) denote the k-th classifier. During testing in the anytime setting we propagate the input through the network until the budget is exhausted and output the most recent prediction. In the batch budget setting at test time, an example traverses the network and exits after classifier f_k if its prediction confidence (we use the maximum value of the softmax probability as a confidence measure) exceeds a pre-determined threshold θ_k. | 1703.09844#16 | 1703.09844#18 | 1703.09844 | [
"1702.07780"
]
|
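The confidence-based exit rule described above (exit at classifier f_k once the maximum softmax probability exceeds θ_k) can be sketched as follows. This is an illustrative PyTorch sketch, not the released implementation; `blocks`, `classifiers`, and `thresholds` are hypothetical names for the per-exit sub-networks, the classifier heads, and the thresholds θ_k.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def early_exit_predict(blocks, classifiers, thresholds, x):
    """Early-exit inference for a single example; returns (prediction, exit index)."""
    feats = x
    for k, (block, clf) in enumerate(zip(blocks, classifiers)):
        feats = block(feats)                       # extend the shared computation
        probs = F.softmax(clf(feats), dim=1)       # class probabilities at exit k
        conf, pred = probs.max(dim=1)
        # exit as soon as the max softmax probability exceeds theta_k,
        # or when the last classifier has been reached
        if conf.item() >= thresholds[k] or k == len(classifiers) - 1:
            return pred.item(), k
```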
1703.09844#18 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Before training, we compute the computational cost, C_k, required to process the network up to the k-th classifier. We denote by 0 < q < 1 a fixed exit probability that a sample that reaches a classifier will obtain a classification with sufficient confidence to exit. We assume that q is constant across all layers, which allows us to compute the probability that a sample exits at classifier k as: q_k = z (1-q)^{k-1} q, where z is a normalizing constant that ensures that sum_k p(q_k) = 1. At test time, we need to ensure that the overall cost of classifying all samples in D_test does not exceed our budget B (in expectation). This gives rise to the constraint |D_test| sum_k q_k C_k <= B. We can solve this constraint for q and determine the thresholds θ_k on a validation set in such a way that approximately |D_test| q_k validation samples exit at the k-th classifier. Loss functions. During training we use cross entropy loss functions L(f_k) for all classifiers and minimize a weighted cumulative loss: (1/|D|) sum_{(x,y) in D} sum_k w_k L(f_k). Herein, D denotes the training set and w_k > 0 the weight of the k-th classifier. If the budget distribution P(B) is known, we can use the weights w_k to incorporate our prior knowledge about the budget B in the learning. Empirically, we find that using the same weight for all loss functions (i.e., setting w_k = 1 for all k) works well in practice. Network reduction and lazy evaluation. There are two straightforward ways to further reduce the computational requirements of MSDNets. First, it is inefficient to maintain all the finer scales until 6 | 1703.09844#17 | 1703.09844#19 | 1703.09844 | [
"1702.07780"
]
|
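A small numerical sketch of the budget bookkeeping described above: choose the exit probability q so that the expected cost |D_test| sum_k q_k C_k stays within B, then pick thresholds θ_k on a validation set so that roughly a fraction q_k of the samples exits at classifier k. The function names and the grid search over q are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def exit_distribution(q, n_classifiers):
    """q_k = z * (1 - q)**(k - 1) * q, normalized so that the q_k sum to one."""
    raw = np.array([(1.0 - q) ** k * q for k in range(n_classifiers)])
    return raw / raw.sum()

def solve_exit_prob(costs, budget, n_test):
    """Smallest q (i.e., latest average exit) whose expected cost fits the budget."""
    costs = np.asarray(costs, dtype=float)
    feasible = [q for q in np.linspace(1e-3, 1.0, 1000)
                if n_test * float(exit_distribution(q, len(costs)) @ costs) <= budget]
    return min(feasible) if feasible else 1.0

def calibrate_thresholds(val_conf, q_k):
    """val_conf: array [n_classifiers, n_val] of max-softmax confidences.
    Returns theta_k so that about q_k[k] * n_val samples exit at classifier k."""
    n_val = val_conf.shape[1]
    remaining = np.arange(n_val)
    thresholds = []
    for k in range(val_conf.shape[0]):
        n_exit = int(round(q_k[k] * n_val))
        if n_exit <= 0 or remaining.size == 0:
            thresholds.append(np.inf)
            continue
        scores = np.sort(val_conf[k, remaining])[::-1]
        theta = scores[min(n_exit, scores.size) - 1]
        thresholds.append(theta)
        remaining = remaining[val_conf[k, remaining] < theta]  # samples that did not exit
    return thresholds
```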
1703.09844#19 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Published as a conference paper at ICLR 2018 the last layer of the network. One simple strategy to reduce the size of the network is by splitting it into S blocks along the depth dimension, and only keeping the coarsest (' â i + 1) scales in the iâ block (a schematic layout of this structure is shown in[Figure 9p. This reduces computational cost for both training and testing. Every time a scale is removed from the network, we add a transition layer between the two blocks that merges the concatenated features using a 1 x 1 convolution and cuts the number of channels in half before feeding the fine-scale features into the coarser scale via a strided convolution (this is similar to the DenseNet-BC architecture of|Huang et al.|(2017)). Second, since a classifier at layer ¢ only uses features from the coarsest scale, the finer feature maps in layer ¢ (and some of the finer feature maps in the previous Sâ 2 layers) do not influence the prediction of that classifier. Therefore, we group the computation in â diagonal blocksâ such that we only propagate the example along paths that are required for the evaluation of the next classifier. This minimizes unnecessary computations when we need to stop because the computational budget is exhausted. We call this strategy lazy evaluation. | 1703.09844#18 | 1703.09844#20 | 1703.09844 | [
"1702.07780"
]
|
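The CIFAR augmentation described above (4-pixel zero padding, random 32x32 crops, horizontal flips with probability 0.5, channel-wise normalization) corresponds to a standard torchvision pipeline, sketched below; the mean/std constants are typical CIFAR statistics inserted here for illustration, not values quoted from the paper.

```python
from torchvision import transforms

cifar_mean, cifar_std = (0.507, 0.487, 0.441), (0.267, 0.256, 0.276)  # illustrative
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),         # zero-pad 4 px per side, random 32x32 crop
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(cifar_mean, cifar_std),  # subtract channel means, divide by stds
])
```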
1703.09844#20 | Multi-Scale Dense Networks for Resource Efficient Image Classification | # 5 EXPERIMENTS We evaluate the effectiveness of our approach on three image classiï¬ cation datasets, i.e., the CIFAR- 10, CIFAR-100 (Krizhevsky & Hinton, 2009) and ILSVRC 2012 (ImageNet; Deng et al. (2009)) datasets. Code to reproduce all results is available at https://anonymous-url. Details on architectural conï¬ gurations of MSDNets are described in Appendix A. Datasets. The two CIFAR datasets contain 50, 000 training and 10, 000 test images of 32 32 pixels; we hold out 5, 000 training images as a validation set. The datasets comprise 10 and 100 classes, respectively. We follow He et al. (2016) and apply standard data-augmentation techniques to the training images: images are zero-padded with 4 pixels on each side, and then randomly cropped to produce 32 32 images. Images are ï¬ ipped horizontally with probability 0.5, and normalized by subtracting channel means and dividing by channel standard deviations. The ImageNet dataset comprises 1, 000 classes, with a total of 1.2 million training images and 50,000 validation images. We hold out 50,000 images from the training set to estimate the conï¬ dence threshold for classiï¬ | 1703.09844#19 | 1703.09844#21 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#21 | Multi-Scale Dense Networks for Resource Efficient Image Classification | ers in MSDNet. We adopt the data augmentation scheme of He et al. (2016) at training time; at test time, we classify a 224 224 center crop of images that were resized to 256 Training Details. We train all models using the framework of Gross & Wilber (2016). On the two CIFAR datasets, all models (including all baselines) are trained using stochastic gradient descent (SGD) with mini-batch size 64. We use Nesterov momentum with a momentum weight of 0.9 without dampening, and a weight decay of 10â | 1703.09844#20 | 1703.09844#22 | 1703.09844 | [
"1702.07780"
]
|
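The optimization settings above translate directly into a standard PyTorch training recipe; the sketch below assumes an arbitrary `model` and omits data loading and the loss computation. The milestones [150, 225] (CIFAR) and [30, 60] (ImageNet) and the hyper-parameters are the ones stated in the text.

```python
import torch

model = torch.nn.Linear(8, 8)  # placeholder for an MSDNet
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            dampening=0, weight_decay=1e-4, nesterov=True)
# CIFAR: 300 epochs, learning rate divided by 10 after epochs 150 and 225
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150, 225], gamma=0.1)

for epoch in range(300):
    # ... one epoch of training with mini-batch size 64 goes here ...
    scheduler.step()
# ImageNet: batch size 256, 90 epochs, milestones [30, 60].
```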
1703.09844#22 | Multi-Scale Dense Networks for Resource Efficient Image Classification | 4. All models are trained for 300 epochs, with an initial learning rate of 0.1, which is divided by a factor 10 after 150 and 225 epochs. We apply the same optimization scheme to the ImageNet dataset, except that we increase the mini-batch size to 256, and all the models are trained for 90 epochs with learning rate drops after 30 and 60 epochs. Ã Ã 5.1 ANYTIME PREDICTION In the anytime prediction setting, the model maintains a progressively updated distribution over classes, and it can be forced to output its most up-to-date prediction at an arbitrary time. Baselines. There exist several baseline approaches for anytime prediction: FractalNets (Larsson et al., 2017), deeply supervised networks (Lee et al., 2015), and ensembles of deep networks of varying or identical sizes. FractalNets allow for multiple evaluation paths during inference time, which vary in computation time. In the anytime setting, paths are evaluated in order of increasing computation. | 1703.09844#21 | 1703.09844#23 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#23 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In our result ï¬ gures, we replicate the FractalNet results reported in the original paper (Larsson et al., 2017) for reference. Deeply supervised networks introduce multiple early-exit classi- ï¬ ers throughout a network, which are applied on the features of the particular layer they are attached to. Instead of using the original model proposed in Lee et al. (2015), we use the more competitive ResNet and DenseNet architectures (referred to as DenseNet-BC in Huang et al. (2017)) as the base networks in our experiments with deeply supervised networks. We refer to these as ResNetMC and DenseNetMC, where M C stands for multiple classiï¬ | 1703.09844#22 | 1703.09844#24 | 1703.09844 | [
"1702.07780"
]
|
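The ensemble baseline evaluation described above (networks evaluated sequentially in ascending order of size, with predictions averaged over the networks evaluated so far) can be sketched as below; the function and argument names are placeholders, not an API from the paper.

```python
import torch

@torch.no_grad()
def anytime_ensemble_predict(models_in_ascending_size, x, n_evaluated):
    """Average class probabilities over the first n_evaluated networks."""
    probs = None
    for model in models_in_ascending_size[:n_evaluated]:
        p = torch.softmax(model(x), dim=1)
        probs = p if probs is None else probs + p
    return probs / n_evaluated
```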
1703.09844#24 | Multi-Scale Dense Networks for Resource Efficient Image Classification | ers. Both networks require about 1.3 108 FLOPs when fully evaluated; the detailed network conï¬ gurations are presented in the supplemen- tary material. In addition, we include ensembles of ResNets and DenseNets of varying or identical sizes. At test time, the networks are evaluated sequentially (in ascending order of network size) to obtain predictions for the test data. All predictions are averaged over the evaluated classiï¬ ers. On 7 | 1703.09844#23 | 1703.09844#25 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#25 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Published as a conference paper at ICLR 2018 Anytime prediction on ImageNet Anytime prediction on CIFAR-100 ee MSDNet oo _ z 66 â MSDNet Ensemble of ResNets (varying depth) 50 F 64 62 tan Ensemble of Denseâ 60 . T r n a) 45 L L L 0.0 04 0.6 08 1.0 12 1d 0.0 0.2 04 0.6 1.0 12 10 14 budget (in MUL-ADD) x0) budget (in MUL-ADD) x108 Figure 5: Accuracy (top-1) of anytime prediction models as a function of computational budget on the ImageNet (left) and CIFAR-100 (right) datasets. Higher is better. ImageNet, we compare MSDNet against a highly competitive ensemble of ResNets and DenseNets, with depth varying from 10 layers to 50 layers, and 36 layers to 121 layers, respectively. Anytime prediction results are presented in Figure 5. The left plot shows the top-1 classiï¬ cation accuracy on the ImageNet validation set. Here, for all budgets in our evaluation, the accuracy of MSDNet substantially outperforms the ResNets and DenseNets ensemble. In particular, when the 8% higher accuracy. budget ranges from 0.1 | 1703.09844#24 | 1703.09844#26 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#26 | Multi-Scale Dense Networks for Resource Efficient Image Classification | à à ⠼ â We evaluate more baselines on CIFAR-100 (and CIFAR-10; see supplementary materials). We observe that MSDNet substantially outperforms ResNetsMC and DenseNetsMC at any computational budget within our range. This is due to the fact that after just a few layers, MSDNets have produced low-resolution feature maps that are much more suitable for classiï¬ cation than the high-resolution feature maps in the early layers of ResNets or DenseNets. MSDNet also outperforms the other baselines for nearly all computational budgets, although it performs on par with ensembles when the budget is very small. In the extremely low-budget regime, ensembles have an advantage because their predictions are performed by the ï¬ rst (small) network, which is optimized exclusively for the low budget. However, the accuracy of ensembles does not increase nearly as fast when the budget is increased. The MSDNet outperforms the ensemble as soon as the latter needs to evaluate a second model: unlike MSDNets, this forces the ensemble to repeat the computation of similar low-level features repeatedly. Ensemble accuracies saturate rapidly when all networks are shallow. | 1703.09844#25 | 1703.09844#27 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#27 | Multi-Scale Dense Networks for Resource Efficient Image Classification | 5.2 BUDGETED BATCH CLASSIFICATION In budgeted batch classiï¬ cation setting, the predictive model receives a batch of M instances and a computational budget B for classifying all M instances. In this setting, we use dynamic evaluation: we perform early-exiting of â easyâ examples at early classiï¬ ers whilst propagating â hardâ examples through the entire network, using the procedure described in Section 4. Baselines. On ImageNet, we compare the dynamically evaluated MSDNet with ï¬ ve ResNets (He et al., 2016) and ï¬ ve DenseNets (Huang et al., 2017), AlexNet (Krizhevsky et al., 2012), and Google- LeNet (Szegedy et al., 2015); see the supplementary material for details. We also evaluate an ensem- ble of the ï¬ ve ResNets that uses exactly the same dynamic-evaluation procedure as MSDNets at test time: â easyâ images are only propagated through the smallest ResNet-10, whereas â hardâ images are classiï¬ ed by all ï¬ | 1703.09844#26 | 1703.09844#28 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#28 | Multi-Scale Dense Networks for Resource Efficient Image Classification | ve ResNet models (predictions are averaged across all evaluated networks in the ensemble). We classify batches of M = 128 images. On CIFAR-100, we compare MSDNet with several highly competitive baselines, including ResNets (He et al., 2016), DenseNets (Huang et al., 2017) of varying sizes, Stochastic Depth Net- works (Huang et al., 2016), Wide ResNets (Zagoruyko & Komodakis, 2016) and FractalNets (Lars- son et al., 2017). We also compare MSDNet to the ResNetMC and DenseNetMC models that were used in Section 5.1, using dynamic evaluation at test time. We denote these baselines as ResNetMC / DenseNetMC with early-exits. To prevent the result plots from becoming too cluttered, we present CIFAR-100 results with dynamically evaluated ensembles in the supplementary material. We clas- sify batches of M = 256 images at test time. Budgeted batch classiï¬ cation results on ImageNet are shown in the left panel of Figure 7. We trained three MSDNets with different depths, each of which covers a different range of compu- | 1703.09844#27 | 1703.09844#29 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#29 | Multi-Scale Dense Networks for Resource Efficient Image Classification | 8 Published as a conference paper at ICLR 2018 7 Budgeted batch classification on ImageNet Budgeted batch classification on CIFAR-100 * ResNet-H0 MSDNet with dynamic evaluation NSDNet with dynamic evaluation ensemble of Re © © MSDNet w/o dynamic evaluation sit ensemble of DenseNets Reset! with carlyenits ons â DenseNet⠢© with early-exits s (He et al., 2015) lm M ResNets (He et al., 2015) x @-© DenseNets (Huang et al., 2016) al., 2016) al, 2016) 016) 0 1 2 3 4 5 00S 10 15 2.0 25 average budget (in MUL-ADD) x1? average budget (in MUL-ADD) x1? Figure 7: Accuracy (top-1) of budgeted batch classiï¬ cation models as a function of average computational budget per image the on ImageNet (left) and CIFAR-100 (right) datasets. Higher is better. tational budgets. We plot the performance of each MSDNet as a gray curve; we select the best model for each budget based on its accuracy on the validation set, and plot the corresponding ac- curacy as a black curve. The plot shows that the predictions of MSDNets with dynamic evaluation are substantially more accurate than those of ResNets and DenseNets that use the same amount of 109 FLOPs, MSDNet achieves a top-1 computation. For instance, with an average budget of 1.7 6% higher than that achieved by a ResNet with the same number of accuracy of times fewer FLOPs. Compared to the computationally efï¬ cient DenseNets, MSDNet uses FLOPs to achieve the same classiï¬ cation accuracy. Moreover, MSDNet with dynamic evaluation allows for very precise tuning of the computational budget that is consumed, which is not possible with individual ResNet or DenseNet models. The ensemble of ResNets or DenseNets with dynamic evaluation performs on par with or worse than their individual counterparts (but they do allow for setting the computational budget very precisely). The right panel of Figure 7 shows our results on CIFAR-100. | 1703.09844#28 | 1703.09844#30 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#30 | Multi-Scale Dense Networks for Resource Efficient Image Classification | The results show that MSDNets con- sistently outperform all baselines across all budgets. Notably, MSDNet performs on par with a 110- layer ResNet using only 1/10th of the computational budget and it is up to 5 times more efï¬ cient than DenseNets, Stochastic Depth Networks, Wide ResNets, and FractalNets. Similar to results in the anytime-prediction setting, MSDNet substantially outperform ResNetsM C and DenseNetsM C with multiple intermediate classiï¬ ers, which provides further evidence that the coarse features in the MSDNet are important for high performance in earlier layers. Visualization. To illustrate the ability of our ap- proach to reduce the computational requirements for classifying â easyâ examples, we show twelve randomly sampled test images from two Ima- geNet classes in Figure 6. The top row shows â easyâ examples that were correctly classiï¬ ed and exited by the ï¬ rst classiï¬ er. The bottom row shows â hardâ examples that would have been in- correctly classiï¬ ed by the ï¬ rst classiï¬ er but were passed on because its uncertainty was too high. The ï¬ gure suggests that early classiï¬ ers recog- nize prototypical class examples, whereas the last classiï¬ er recognizes non-typical images. _ f (a) Red wine (b) Volcano sy ws ~â Figure 6: Sampled images from the ImageNet classes Red wine and Volcano. Top row: images exited from the ï¬ rst classiï¬ er of a MSDNet with correct predic- tion; Bottom row: images failed to be correctly clas- siï¬ ed at the ï¬ rst classiï¬ er but were correctly pre- dicted and exited at the last layer. 5.3 MORE COMPUTATIONALLY EFFICIENT DENSENETS Here, we discuss an interesting ï¬ nding during our exploration of the MSDNet architecture. We found that following the DenseNet structure to design our network, i.e., by keeping the number of output channels (or growth rate) the same at all scales, did not lead to optimal results in terms of the accuracy-speed trade-off. | 1703.09844#29 | 1703.09844#31 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#31 | Multi-Scale Dense Networks for Resource Efficient Image Classification | The main reason for this is that compared to network architectures like ResNets, the DenseNet structure tends to apply more ï¬ lters on the high-resolution feature maps in the network. This helps to reduce the number of parameters in the model, but at the same time, it greatly increases the computational cost. We tried to modify DenseNets by doubling the growth rate 9 Published as a conference paper at ICLR 2018 Anytime prediction on CIFAR-100 Batch computational learning on CIFAR-100 7s : : : ee â MSDNet with early-exits H 8 Del s (Huang et al., 2016) bom De st lik. . J} i: 0.0 0.2 04 0. 6 2 0.0 0.5, 10 15 2.0 2.5 budget (in MUL- ADD) x10® average budget (in MUL-ADD) x10° Figure 8: Test accuracy of DenseNet* on CIFAR-100 under the anytime learning setting (left) and the budgeted batch setting (right). after each transition layer, so that more ï¬ lters are applied to low-resolution feature maps. It turns out that the resulting network, which we denote as DenseNet*, signiï¬ cantly outperform the original DenseNet in terms of computational efï¬ ciency. We experimented with DenseNet* in our two settings with test time budget constraints. The left panel of Figure 8 shows the anytime prediction performance of an ensemble of DenseNets* of vary- ing depths. It outperforms the ensemble of original DenseNets of varying depth by a large margin, but is still slightly worse than MSDNets. In the budgeted batch budget setting, DenseNet* also leads to signiï¬ cantly higher accuracy over its counterpart under all budgets, but is still substantially outperformed by MSDNets. # 6 CONCLUSION We presented the MSDNet, a novel convolutional network architecture, optimized to incorporate CPU budgets at test-time. Our design is based on two high-level design principles, to generate and maintain coarse level features throughout the network and to inter-connect the layers with dense connectivity. The former allows us to introduce intermediate classiï¬ ers even at early layers and the latter ensures that these classiï¬ | 1703.09844#30 | 1703.09844#32 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#32 | Multi-Scale Dense Networks for Resource Efficient Image Classification | ers do not interfere with each other. The ï¬ nal design is a two dimensional array of horizontal and vertical layers, which decouples depth and feature coarseness. Whereas in traditional convolutional networks features only become coarser with increasing depth, the MSDNet generates features of all resolutions from the ï¬ rst layer on and maintains them through- out. The result is an architecture with an unprecedented range of efï¬ ciency. A single network can outperform all competitive baselines on an impressive range of computational budgets ranging from highly limited CPU constraints to almost unconstrained settings. As future work we plan to investigate the use of resource-aware deep architectures beyond object classiï¬ cation, e.g. image segmentation (Long et al., 2015). Further, we intend to explore approaches that combine MSDNets with model compression (Chen et al., 2015; Han et al., 2015), spatially adaptive computation (Figurnov et al., 2016) and more efï¬ cient convolution operations (Chollet, 2016; Howard et al., 2017) to further improve computational efï¬ ciency. ACKNOWLEDGMENTS The authors are supported in part by grants from the National Science Foundation ( III-1525919, IIS-1550179, IIS-1618134, S&AS 1724282, and CCF-1740822), the Ofï¬ ce of Naval Research DOD (N00014-17-1-2175), and the Bill and Melinda Gates Foundation. We are also thankful for generous support by SAP America Inc. # REFERENCES Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016. | 1703.09844#31 | 1703.09844#33 | 1703.09844 | [
"1702.07780"
]
|
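The early-exit behavior described in the conclusion (intermediate classifiers that a budgeted evaluation can stop at) can be sketched as follows; this is a simplified, single-example illustration assuming generic `blocks` and `classifiers` module lists and an arbitrary confidence threshold, not MSDNet's actual implementation or threshold values.

```python
import torch

@torch.no_grad()
def early_exit_predict(x, blocks, classifiers, threshold=0.9):
    """Propagate one example through the blocks and return the prediction of the
    first intermediate classifier whose softmax confidence exceeds `threshold`
    (falling back to the final classifier if none is confident enough)."""
    features = x
    logits = None
    for block, classifier in zip(blocks, classifiers):
        features = block(features)
        logits = classifier(features)
        confidence = logits.softmax(dim=1).max(dim=1).values
        if confidence.item() >= threshold:  # confident enough: exit early
            break
    return logits.argmax(dim=1)
```

In a budgeted batch evaluation, the threshold would be chosen so that the expected cost over the whole test set meets the budget: easy examples then exit at early classifiers while hard ones use the full network.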
1703.09844#33 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Tolga Bolukbasi, Joseph Wang, Ofer Dekel, and Venkatesh Saligrama. Adaptive neural networks for fast test-time prediction. arXiv preprint arXiv:1702.07811, 2017. Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In ACM SIGKDD, pp. 535–541. ACM, 2006. Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. | 1703.09844#32 | 1703.09844#34 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#34 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Compressing neural networks with the hashing trick. In ICML, pp. 2285–2294, 2015. François Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248–255, 2009. Michael Figurnov, Maxwell D Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov. Spatially adaptive computation time for residual networks. arXiv preprint arXiv:1612.02297, 2016. Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016. Sam Gross and Michael Wilber. Training and investigating residual nets. 2016. URL http://torch.ch/blog/2016/02/04/resnets.html. | 1703.09844#33 | 1703.09844#35 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#35 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Alexander Grubb and Drew Bagnell. Speedboost: Anytime prediction with uniform near-optimality. In AISTATS, volume 15, pp. 458–466, 2012. Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2015. Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IJCNN, pp. 293–299, 1993. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. | 1703.09844#34 | 1703.09844#36 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#36 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, pp. 1026–1034, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop, 2014. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. | 1703.09844#35 | 1703.09844#37 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#37 | Multi-Scale Dense Networks for Resource Efficient Image Classification | MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In ECCV, pp. 646–661. Springer, 2016. Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In CVPR, 2017. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In NIPS, pp. 4107–4115, 2016. Sergey Ioffe and Christian Szegedy. | 1703.09844#36 | 1703.09844#38 | 1703.09844 | [
"1702.07780"
]
|
1703.09844#38 | Multi-Scale Dense Networks for Resource Efficient Image Classification | Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448–456, 2015. Jörn-Henrik Jacobsen, Edouard Oyallon, Stéphane Mallat, and Arnold WM Smeulders. Multiscale hierarchical convolutional networks. arXiv preprint arXiv:1703.04140, 2017. Sergey Karayev, Mario Fritz, and Trevor Darrell. | 1703.09844#37 | 1703.09844#39 | 1703.09844 | [
"1702.07780"
]
|