id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
1602.07360#22 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | In addition, these results demonstrate that Deep Compression (Han et al., 2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10x while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510x reduction in model size with no decrease in accuracy compared to the baseline. Finally, note that Deep Compression (Han et al., 2015a) uses a codebook as part of its scheme for quantizing CNN parameters to 6 or 8 bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 32/6 = 5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware -- | 1602.07360#21 | 1602.07360#23 | 1602.07360 | [
"1512.00567"
] |
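The codebook idea referenced in the chunk above can be made concrete with a short sketch. The code below is a hedged illustration, not the Deep Compression implementation: it clusters one layer's weights with plain 1-D k-means so each weight is stored as a 6-bit index into a 64-entry codebook, which is where the 32/6 = 5.3x storage arithmetic comes from; the function name and initialization scheme are assumptions made for this example.

```python
# Minimal sketch of codebook (shared-weight) quantization in the spirit of
# Deep Compression. Illustrative only; not the authors' code.
import numpy as np

def codebook_quantize(weights, bits=6, iters=20):
    """Return (codebook, indices) for a flat float32 weight array."""
    k = 2 ** bits                                    # 64 centroids for 6 bits
    flat = weights.ravel()
    codebook = np.linspace(flat.min(), flat.max(), k)  # linear initialization
    for _ in range(iters):                           # plain 1-D k-means
        indices = np.argmin(np.abs(flat[:, None] - codebook[None, :]), axis=1)
        for j in range(k):
            members = flat[indices == j]
            if members.size:
                codebook[j] = members.mean()
    indices = np.argmin(np.abs(flat[:, None] - codebook[None, :]), axis=1)
    return codebook, indices.astype(np.uint8)

w = np.random.randn(1000).astype(np.float32)
codebook, idx = codebook_quantize(w, bits=6)
w_hat = codebook[idx]                                # dequantized weights
# Storage drops from 32 bits to ~6 bits per weight (~5.3x), but commodity
# processors still compute in float, hence the need for EIE-style hardware
# to turn the compression into a speedup.
print(np.abs(w - w_hat).max())
```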
1602.07360#23 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Efficient Inference Engine (EIE) -- that can compute codebook-quantized CNNs more efficiently (Han et al., 2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits (Gysel, 2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types. # 5 CNN MICROARCHITECTURE DESIGN SPACE EXPLORATION So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers). In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy. (Footnote 6: Note that, due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3x decrease in model size.) | 1602.07360#22 | 1602.07360#24 | 1602.07360 | [
"1512.00567"
] |
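The Ristretto result above concerns linear (fixed-point) 8-bit quantization of parameters and activations. The snippet below is a hedged sketch of that general idea under a simple symmetric-scaling assumption; it is not Gysel's implementation, and the function names are invented for illustration.

```python
# Hedged sketch of linear 8-bit quantization: values are mapped onto an
# int8 grid with a per-tensor scale and computation proceeds on the
# quantized values. Symmetric scaling is an assumption for this example.
import numpy as np

def quantize_linear_8bit(x):
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

acts = np.random.randn(4, 16).astype(np.float32)
q, s = quantize_linear_8bit(acts)
err = np.abs(acts - dequantize(q, s)).max()
print(f"max round-trip error: {err:.4f}")  # small relative to the value range
```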
1602.07360#24 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | [Figure 3: Microarchitectural design space exploration. (a) Exploring the impact of the squeeze ratio (SR) on model size and accuracy. (b) Exploring the impact of the ratio of 3x3 filters in expand layers (pct_3x3) on model size and accuracy.] 5.1 CNN MICROARCHITECTURE METAPARAMETERS In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: s1x1, e1x1, and e3x3. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher-level metaparameters which control the dimensions of all Fire modules in a CNN. We define base_e as the number of expand filters in the | 1602.07360#23 | 1602.07360#25 | 1602.07360 | [
"1512.00567"
] |
1602.07360#25 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | first Fire module in a CNN. After every freq Fire modules, we increase the number of expand filters by incr_e. In other words, for Fire module i, the number of expand filters is e_i = base_e + (incr_e * floor(i/freq)). In the expand layer of a Fire module, some filters are 1x1 and some are 3x3; we define e_i = e_i,1x1 + e_i,3x3 with pct_3x3 (in the range [0, 1], shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, e_i,3x3 = e_i * pct_3x3, and e_i,1x1 = e_i * (1 - pct_3x3). Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range [0, 1], shared by all Fire modules): s_i,1x1 = SR * e_i (or equivalently s_i,1x1 = SR * (e_i,1x1 + e_i,3x3)). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: base_e = 128, incr_e = 128, pct_3x3 = 0.5, freq = 2, and SR = 0.125. 5.2 SQUEEZE RATIO In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy. In these experiments, we use SqueezeNet (Figure 2) as a starting point. | 1602.07360#24 | 1602.07360#26 | 1602.07360 | [
"1512.00567"
] |
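The metaparameters defined above fully determine the per-module dimensions. The sketch below makes this concrete: with the SqueezeNet values (base_e = 128, incr_e = 128, pct_3x3 = 0.5, freq = 2, SR = 0.125) it reproduces the Fire-module sizes referenced from Table 1. The function name and dictionary layout are assumptions made for this example.

```python
# Sketch of the metaparameter scheme from Section 5.1: compute the
# squeeze/expand dimensions of each of the 8 Fire modules.
def fire_dims(num_fire=8, base_e=128, incr_e=128, freq=2, pct3x3=0.5, sr=0.125):
    dims = []
    for i in range(num_fire):
        e_i = base_e + incr_e * (i // freq)   # expand filters grow every `freq` modules
        e_3x3 = int(e_i * pct3x3)             # split expand filters between 3x3 ...
        e_1x1 = e_i - e_3x3                   # ... and 1x1
        s_1x1 = int(sr * e_i)                 # squeeze layer is SR times the expand layer
        dims.append({"s1x1": s_1x1, "e1x1": e_1x1, "e3x3": e_3x3})
    return dims

for i, d in enumerate(fire_dims(), start=2):  # SqueezeNet numbers its Fire modules 2..9
    print(f"fire{i}: {d}")
# fire2/3: s1x1=16, e1x1=64, e3x3=64  ...  fire8/9: s1x1=64, e1x1=256, e3x3=256
```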
1602.07360#26 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | As in SqueezeNet, these experiments use the following metaparameters: base_e = 128, incr_e = 128, pct_3x3 = 0.5, and freq = 2. We train multiple models, where each model has a different squeeze ratio (SR) in the range [0.125, 1.0]. In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR=0.125 point in this figure. From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR=0.75 (a 19MB model), and setting SR=1.0 further increases model size without improving accuracy. 5.3 TRADING OFF 1X1 AND 3X3 FILTERS In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. | 1602.07360#25 | 1602.07360#27 | 1602.07360 | [
"1512.00567"
] |
1602.07360#27 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | An open question is, how important is spatial resolution in CNN filters? (Footnote 7: Note that, for a given model, all Fire layers share the same squeeze ratio.) (Footnote 8: Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers.) The VGG (Simonyan & Zisserman, 2014) architectures have 3x3 spatial resolution in most layers' | 1602.07360#26 | 1602.07360#28 | 1602.07360 | [
"1512.00567"
] |
1602.07360#28 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | filters; GoogLeNet (Szegedy et al., 2014) and Network-in-Network (NiN) (Lin et al., 2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis. Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy. We use the following metaparameters in this experiment: base_e = incr_e = 128, freq = 2, SR = 0.500, and we vary pct_3x3 from 1% to 99%. | 1602.07360#27 | 1602.07360#29 | 1602.07360 | [
"1512.00567"
] |
1602.07360#29 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | In other words, each Fire module's expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from "mostly 1x1" to "mostly 3x3". As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: SR = 0.500 and pct_3x3 = 50%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet. 6 CNN MACROARCHITECTURE DESIGN SPACE EXPLORATION So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet (He et al., 2015b), we explored three different architectures: (1) vanilla SqueezeNet (as per the prior sections); (2) SqueezeNet with simple bypass connections between some Fire modules (inspired by Srivastava et al., 2015; He et al., 2015b); and (3) SqueezeNet with complex bypass connections between the remaining Fire modules. We illustrate these three variants of SqueezeNet in Figure 2. Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model. | 1602.07360#28 | 1602.07360#30 | 1602.07360 | [
"1512.00567"
] |
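The bypass variants above wrap Fire modules, whose structure (a squeeze layer of 1x1 filters feeding parallel 1x1 and 3x3 expand layers whose outputs are concatenated) is defined earlier in the paper. A hedged PyTorch-style re-implementation for reference -- the original models were built in Caffe, and the sizes used below correspond to the first Fire module (fire2) as generated by the Section 5.1 metaparameters:

```python
# Hedged sketch of a Fire module: squeeze into 1x1 filters, then expand into
# parallel 1x1 and 3x3 filters and concatenate the results along the channels.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, s1x1, e1x1, e3x3):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, s1x1, kernel_size=1)
        self.expand1x1 = nn.Conv2d(s1x1, e1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(s1x1, e3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

fire2 = Fire(96, s1x1=16, e1x1=64, e3x3=64)      # illustrative input width of 96
print(fire2(torch.randn(1, 96, 55, 55)).shape)   # torch.Size([1, 128, 55, 55])
```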
1602.07360#30 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | One limitation is that, in the straightforward case, the number of input channels and number of output channels have to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Figure 2. When the "same number of channels" requirement can't be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is "just a wire," we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not. In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers. We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. | 1602.07360#29 | 1602.07360#31 | 1602.07360 | [
"1512.00567"
] |
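The two bypass types described above differ only in whether the skip path carries a 1x1 projection. A hedged sketch wrapping an arbitrary module (e.g. a Fire module); class names and channel sizes are illustrative, not the paper's code:

```python
# Simple bypass: parameter-free elementwise addition, requires matching depths.
# Complex bypass: a 1x1 convolution on the skip path supplies the extra
# parameters and matches the required number of output channels.
import torch
import torch.nn as nn

class SimpleBypass(nn.Module):
    def __init__(self, f):
        super().__init__()
        self.f = f

    def forward(self, x):
        return x + self.f(x)            # f(x) and x must have the same depth

class ComplexBypass(nn.Module):
    def __init__(self, f, in_ch, out_ch):
        super().__init__()
        self.f = f
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # extra parameters

    def forward(self, x):
        return self.proj(x) + self.f(x)

same = SimpleBypass(nn.Conv2d(128, 128, 1))            # stand-in for a Fire module
wider = ComplexBypass(nn.Conv2d(128, 256, 1), 128, 256)
print(same(torch.randn(1, 128, 55, 55)).shape)         # torch.Size([1, 128, 55, 55])
print(wider(torch.randn(1, 128, 55, 55)).shape)        # torch.Size([1, 256, 55, 55])
```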
1602.07360#31 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy improvement than the complex bypass. (Footnote 9: To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3.) | 1602.07360#30 | 1602.07360#32 | 1602.07360 | [
"1512.00567"
] |
1602.07360#32 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Table 3: SqueezeNet accuracy and model size using different macroarchitecture configurations -- Vanilla SqueezeNet: 57.5% top-1 accuracy, 80.3% top-5 accuracy, 4.8MB model size; SqueezeNet + Simple Bypass: 60.4% top-1, 82.5% top-5, 4.8MB; SqueezeNet + Complex Bypass: 58.8% top-1, 82.0% top-5, 7.7MB. Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size. 7 CONCLUSIONS In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50x fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510x smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) (Han et al., 2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2. We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA (Gschwend, 2016). As we anticipated, Gschwend was able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters. In the context of this paper, we focused on ImageNet as a target dataset. | 1602.07360#31 | 1602.07360#33 | 1602.07360 | [
"1512.00567"
] |
1602.07360#33 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition (Zhang et al., 2013; Donahue et al., 2013), logo identification in images (Iandola et al., 2015), and generating sentences about images (Fang et al., 2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images (Iandola et al., 2014; Girshick et al., 2015; Ashraf et al., 2016) and videos (Chen et al., 2015b), as well as segmenting the shape of the road (Badrinarayanan et al., 2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance. SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner. | 1602.07360#32 | 1602.07360#34 | 1602.07360 | [
"1512.00567"
] |
1602.07360#34 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | # REFERENCES Khalid Ashraf, Bichen Wu, Forrest N. Iandola, Matthew W. Moskewicz, and Kurt Keutzer. Shallow networks for high-accuracy road object-detection. arXiv:1606.01561, 2016. Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv:1511.00561, 2015. | 1602.07360#33 | 1602.07360#35 | 1602.07360 | [
"1512.00567"
] |
1602.07360#35 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Eddie Bell. An implementation of squeezenet in chainer. https://github.com/ejlb/squeezenet-chainer, 2016. J. Bergstra and Y. Bengio. An optimization methodology for neural network weights and architectures. JMLR, 2012. Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. Mxnet: | 1602.07360#34 | 1602.07360#36 | 1602.07360 | [
"1512.00567"
] |
1602.07360#36 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015a. Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew G Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals for accurate object class detection. In NIPS, 2015b. Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: efficient primitives for deep learning. arXiv:1410.0759, 2014. Francois Chollet. Keras: Deep learning library for theano and tensorflow. https://keras.io, 2016. Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: | 1602.07360#35 | 1602.07360#37 | 1602.07360 | [
"1512.00567"
] |
1602.07360#37 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | A matlab-like environment for machine learning. In NIPS BigLearn Workshop, 2011. Consumer Reports. Tesla's new autopilot: Better but still needs improvement. http://www.consumerreports.org/tesla/tesla-new-autopilot-better-but-needs-improvement, 2016. Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidyanathan, Srinivas Sridharan, Dhiraj D. Kalamkar, Bharat Kaul, and Pradeep Dubey. | 1602.07360#36 | 1602.07360#38 | 1602.07360 | [
"1512.00567"
] |
1602.07360#38 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Distributed deep learning using synchronous stochastic gradient descent. arXiv:1602.06709, 2016. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. E.L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. | 1602.07360#37 | 1602.07360#39 | 1602.07360 | [
"1512.00567"
] |
1602.07360#39 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Decaf: A deep convolutional activation feature for generic visual recognition. arXiv:1310.1531, 2013. DT42. Squeezenet keras implementation. https://github.com/DT42/squeezenet_demo, 2016. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. | 1602.07360#38 | 1602.07360#40 | 1602.07360 | [
"1512.00567"
] |
1602.07360#40 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | From captions to visual concepts and back. In CVPR, 2015. Ross B. Girshick, Forrest N. Iandola, Trevor Darrell, and Jitendra Malik. Deformable part models are convolutional neural networks. In CVPR, 2015. David Gschwend. Zynqnet: An fpga-accelerated embedded convolutional neural network. Master's thesis, Swiss Federal Institute of Technology Zurich (ETH-Zurich), 2016. Philipp Gysel. | 1602.07360#39 | 1602.07360#41 | 1602.07360 | [
"1512.00567"
] |
1602.07360#41 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Ristretto: Hardware-oriented approximation of convolutional neural networks. arXiv:1605.06402, 2016. S. Han, H. Mao, and W. Dally. Deep compression: Compressing DNNs with pruning, trained quantization and huffman coding. arXiv:1510.00149v3, 2015a. S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015b. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. | 1602.07360#40 | 1602.07360#42 | 1602.07360 | [
"1512.00567"
] |
1602.07360#42 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Eie: Efficient inference engine on compressed deep neural network. International Symposium on Computer Architecture (ISCA), 2016a. Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Shijian Tang, Erich Elsen, Bryan Catanzaro, John Tran, and William J. Dally. Dsd: Regularizing deep neural networks with dense-sparse-dense training flow. arXiv:1607.04381, 2016b. Guo Haria. convert squeezenet to mxnet. https://github.com/haria/SqueezeNet/commit/0cf57539375fd5429275af36fc94c774503427c3, 2016. | 1602.07360#41 | 1602.07360#43 | 1602.07360 | [
"1512.00567"
] |
1602.07360#43 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, 2015a. Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015b. | 1602.07360#42 | 1602.07360#44 | 1602.07360 | [
"1512.00567"
] |
1602.07360#44 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Forrest N. Iandola, Matthew W. Moskewicz, Sergey Karayev, Ross B. Girshick, Trevor Darrell, and Kurt Keutzer. Densenet: Implementing efficient convnet descriptor pyramids. arXiv:1404.1869, 2014. Forrest N. Iandola, Anting Shen, Peter Gao, and Kurt Keutzer. DeepLogo: Hitting logo recognition with the deep neural network hammer. arXiv:1510.02131, 2015. | 1602.07360#43 | 1602.07360#45 | 1602.07360 | [
"1512.00567"
] |
1602.07360#45 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. FireCaffe: near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR, 2015. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Ser- gio Guadarrama, and Trevor Darrell. | 1602.07360#44 | 1602.07360#46 | 1602.07360 | [
"1512.00567"
] |
1602.07360#46 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012. Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, and L.D. Jackel. | 1602.07360#45 | 1602.07360#47 | 1602.07360 | [
"1512.00567"
] |
1602.07360#47 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv:1312.4400, 2013. T.B. Ludermir, A. Yamazaki, and C. Zanchettin. An optimization methodology for neural network weights and architectures. IEEE Trans. Neural Networks, 2006. Dmytro Mishkin, Nikolay Sergievskiy, and Jiri Matas. Systematic evaluation of cnn advances on the imagenet. arXiv:1606.02228, 2016. Vinod Nair and Geoffrey E. Hinton. Rectifi | 1602.07360#46 | 1602.07360#48 | 1602.07360 | [
"1512.00567"
] |
1602.07360#48 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | ed linear units improve restricted boltzmann machines. In ICML, 2010. Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu, Sen Song, Yu Wang, and Huazhong Yang. Going deeper with embedded fpga platform for convolutional neural network. In ACM International Symposium on FPGA, 2016. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014. | 1602.07360#47 | 1602.07360#49 | 1602.07360 | [
"1512.00567"
] |
1602.07360#49 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | J. Snoek, H. Larochelle, and R.P. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, 2012. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014. R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. In ICML Deep Learning Workshop, 2015. | 1602.07360#48 | 1602.07360#50 | 1602.07360 | [
"1512.00567"
] |
1602.07360#50 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | K.O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Neu- rocomputing, 2002. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du- mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv:1409.4842, 2014. | 1602.07360#49 | 1602.07360#51 | 1602.07360 | [
"1512.00567"
] |
1602.07360#51 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv:1512.00567, 2015. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv:1602.07261, 2016. | 1602.07360#50 | 1602.07360#52 | 1602.07360 | [
"1512.00567"
] |
1602.07360#52 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In NIPS Workshop on Machine Learning Systems (LearningSys), 2015. Sagar M Waghmare. FireModule.lua. https://github.com/Element-Research/dpnn/blob/master/FireModule.lua, 2016. Ning Zhang, Ryan Farrell, Forrest Iandola, and Trevor Darrell. | 1602.07360#51 | 1602.07360#53 | 1602.07360 | [
"1512.00567"
] |
1602.07360#53 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Deformable part descriptors for fine-grained recognition and attribute prediction. In ICCV, 2013. | 1602.07360#52 | 1602.07360 | [
"1512.00567"
] |
|
1602.07261#0 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | arXiv:1602.07261v2 [cs.CV] 23 Aug 2016 # Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning # Christian Szegedy Google Inc. 1600 Amphitheatre Pkwy, Mountain View, CA [email protected] # Vincent Vanhoucke [email protected] Sergey Ioffe [email protected] # Alex Alemi [email protected] | 1602.07261#1 | 1602.07261 | [
"1512.00567"
] |
|
1602.07261#1 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | # Abstract Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks signifi | 1602.07261#0 | 1602.07261#2 | 1602.07261 | [
"1512.00567"
] |
1602.07261#2 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | cantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classifi | 1602.07261#1 | 1602.07261#3 | 1602.07261 | [
"1512.00567"
] |
1602.07261#3 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | cation (CLS) challenge. tion [7], object tracking [18], and superresolution [3]. These examples are but a few of all the applications to which deep convolutional networks have been very successfully applied ever since. In this work we study the combination of the two most recent ideas: residual connections introduced by He et al. in [5] and the latest revised version of the Inception architecture [15]. In [5], it is argued that residual connections are of inherent importance for training very deep architectures. Since Inception networks tend to be very deep, it is natural to replace the filter concatenation stage of the Inception architecture with residual connections. This would allow Inception to reap all the benefits of the residual approach while retaining its computational efficiency. Besides a straightforward integration, we have also studied whether Inception itself can be made more efficient by making it deeper and wider. For that purpose, we designed a new version named Inception-v4 which has a more uniform simplified architecture and more inception modules than Inception-v3. Historically, Inception-v3 had inherited a lot of the baggage of the earlier incarnations. | 1602.07261#2 | 1602.07261#4 | 1602.07261 | [
"1512.00567"
] |
1602.07261#4 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | The technical constraints chiefly came from the need for partitioning the model for distributed training using DistBelief [2]. Now, after migrating our training setup to TensorFlow [1] these constraints have been lifted, which allowed us to simplify the architecture significantly. The details of that simplified architecture are described in Section 3. # 1. Introduction Since the 2012 ImageNet competition [11] winning entry by Krizhevsky et al. [8], their network "AlexNet" has been successfully applied to a larger variety of computer vision tasks, for example to object-detection [4], segmentation [10], human pose estimation [17], video classifica- In this report, we will compare the two pure Inception variants, Inception-v3 and v4, with similarly expensive hybrid Inception-ResNet versions. Admittedly, those models were picked in a somewhat ad hoc manner with the main constraint being that the parameters and computational complexity of the models should be somewhat similar to the cost of the non-residual models. In fact we have tested bigger and wider Inception-ResNet variants and they performed very similarly on the ImageNet classifi | 1602.07261#3 | 1602.07261#5 | 1602.07261 | [
"1512.00567"
] |
1602.07261#5 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | cation challenge [11] dataset. The last experiment reported here is an evaluation of an ensemble of all the best performing models presented here. As it was apparent that both Inception-v4 and Inception-ResNet-v2 performed similarly well, exceeding state-of-the-art single frame performance on the ImageNet validation dataset, we wanted to see how a combination of those pushes the state of the art on this well studied dataset. Surprisingly, we found that gains on the single-frame performance do not translate into similarly large gains on ensembled performance. Nonetheless, it still allows us to report 3.1% top-5 error on the validation set with four models ensembled, setting a new state of the art, to our best knowledge. In the last section, we study some of the classification failures and conclude that the ensemble still has not reached the label noise of the annotations on this dataset and there is still room for improvement for the predictions. # 2. Related Work | 1602.07261#4 | 1602.07261#6 | 1602.07261 | [
"1512.00567"
] |
1602.07261#6 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Convolutional networks have become popular in large scale image recognition tasks after Krizhevsky et al. [8]. Some of the next important milestones were Network-in-network [9] by Lin et al., VGGNet [12] by Simonyan et al. and GoogLeNet (Inception-v1) [14] by Szegedy et al. Residual connections were introduced by He et al. in [5], in which they give convincing theoretical and practical evidence for the advantages of utilizing additive merging of signals both for image recognition, and especially for object detection. The authors argue that residual connections are inherently necessary for training very deep convolutional models. | 1602.07261#5 | 1602.07261#7 | 1602.07261 | [
"1512.00567"
] |
1602.07261#7 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Our findings do not seem to support this view, at least for image recognition. However, it might require more measurement points with deeper architectures to understand the true extent of beneficial aspects offered by residual connections. In the experimental section we demonstrate that it is not very difficult to train competitive very deep networks without utilizing residual connections. However, the use of residual connections seems to improve the training speed greatly, which is alone a great argument for their use. The Inception deep convolutional architecture was introduced in [14] and was called GoogLeNet or Inception-v1 in our exposition. Later the Inception architecture was refined in various ways, first by the introduction of batch normalization [6] (Inception-v2) by Ioffe et al. Later the architecture was improved by additional factorization ideas in the third iteration [15] which will be referred to as Inception-v3 in this report. | 1602.07261#6 | 1602.07261#8 | 1602.07261 | [
"1512.00567"
] |
1602.07261#8 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Figure 1. Residual connections as introduced in He et al. [5]. [Diagram omitted.] Figure 2. Optimized version of ResNet connections by [5] to shield computation. # 3. Architectural Choices # 3.1. Pure Inception blocks Our older Inception models used to be trained in a partitioned manner, where each replica was partitioned into multiple sub-networks in order to be able to fit the whole model in memory. However, the Inception architecture is highly tunable, meaning that there are a lot of possible changes to the number of filters in the various layers that do not affect the quality of the fully trained network. In order to optimize the training speed, we used to tune the layer sizes carefully in order to balance the computation between the various model sub-networks. In contrast, with the introduction of TensorFlow our most recent models can be trained without partitioning the replicas. This is enabled in part by recent optimizations of memory used by backpropagation, achieved by carefully considering what tensors are needed for gradient computation and structuring the computation to reduce the number of such tensors. Historically, we have been relatively conservative about changing the architectural choices and restricted our experiments to varying isolated network components while keeping the rest of the network stable. Not simplifying earlier choices resulted in networks that looked more complicated than they needed to be. In our newer experiments, for Inception-v4 we decided to shed this unnecessary baggage and made uniform choices for the Inception blocks for each grid size. Please refer to Figure 9 for the large scale structure of the Inception-v4 network and Figures 3, 4, 5, 6, 7 and 8 for the detailed structure of its components. All the convolutions not marked with "V" in the | 1602.07261#7 | 1602.07261#9 | 1602.07261 | [
"1512.00567"
] |
1602.07261#9 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | figures are same-padded, meaning that their output grid matches the size of their input. Convolutions marked with "V" are valid padded, meaning that the input patch of each unit is fully contained in the previous layer and the grid size of the output activation map is reduced accordingly. # 3.2. Residual Inception Blocks For the residual versions of the Inception networks, we use cheaper Inception blocks than the original Inception. Each Inception block is followed by a filter-expansion layer (1x1 convolution without activation) which is used for scaling up the dimensionality of the filter bank before the addition to match the depth of the input. This is needed to compensate for the dimensionality reduction induced by the Inception block. We tried several versions of the residual version of Inception. Only two of them are detailed here. The first one, "Inception-ResNet-v1", roughly matches the computational cost of Inception-v3, while "Inception-ResNet-v2" matches the raw cost of the newly introduced Inception-v4 network. See Figure 15 for the large scale structure of both variants. (However, the step time of Inception-v4 proved to be significantly slower in practice, probably due to the larger number of layers.) Another small technical difference between our residual and non-residual Inception variants is that in the case of Inception-ResNet, we used batch-normalization only on top of the traditional layers, but not on top of the summations. It is reasonable to expect that a thorough use of batch-normalization should be advantageous, but we wanted to keep each model replica trainable on a single GPU. It turned out that the memory footprint of layers with large activation size was consuming a disproportionate amount of GPU memory. By omitting the batch-normalization on top of those layers, we were able to increase the overall number of Inception blocks substantially. We hope that with better utilization of computing resources, making this trade-off will become unnecessary. | 1602.07261#8 | 1602.07261#10 | 1602.07261 | [
"1512.00567"
] |
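The residual Inception wiring described above (a multi-branch Inception block, a linear 1x1 filter-expansion convolution to restore the input depth, then an elementwise addition) can be sketched as follows. This is a hedged, simplified two-branch stand-in rather than one of the paper's exact Inception-ResNet blocks; the class name and branch widths are assumptions.

```python
# Hedged sketch of a residual Inception block with a linear filter-expansion
# 1x1 convolution (no activation) before the addition.
import torch
import torch.nn as nn

class ResidualInception(nn.Module):
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1))
        # linear 1x1 conv expands the concatenated branches back to in_ch
        self.expand = nn.Conv2d(2 * branch_ch, in_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        branches = torch.cat([self.branch1(x), self.branch3(x)], dim=1)
        return self.relu(x + self.expand(branches))

block = ResidualInception(in_ch=256, branch_ch=32)
print(block(torch.randn(2, 256, 35, 35)).shape)   # torch.Size([2, 256, 35, 35])
```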
1602.07261#10 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | [Figure 3 diagram omitted: the stem network, from the 299x299x3 input through stride-2 3x3 convolutions, max-pooling, and filter concatenations up to a 35x35x384 output.] Figure 3. | 1602.07261#9 | 1602.07261#11 | 1602.07261 | [
"1512.00567"
] |
1602.07261#11 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | The schema for stem of the pure Inception-v4 and Inception-ResNet-v2 networks. This is the input part of those networks. Cf. Figures 9 and 15. [Diagram omitted.] Figure 4. The schema for 35 x 35 grid modules of the pure Inception-v4 network. This is the Inception-A block of Figure 9. [Diagram omitted.] Figure 5. The schema for 17 x 17 grid modules of the pure Inception-v4 network. This is the Inception-B block of Figure 9. [Diagram omitted.] | 1602.07261#10 | 1602.07261#12 | 1602.07261 | [
"1512.00567"
] |
1602.07261#12 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Figure 6. The schema for 8x8 grid modules of the pure Inception-v4 network. This is the Inception-C block of Figure 9. [Diagram omitted.] Figure 7. The schema for the 35 x 35 to 17 x 17 reduction module. Different variants of this block (with various numbers of filters) are used in Figures 9 and 15, in each of the new Inception(-v4, -ResNet-v1, -ResNet-v2) variants presented in this paper. The k, l, m, n numbers represent filter bank sizes which can be looked up in Table 1. [Diagram omitted.] Figure 8. The schema for the 17 x 17 to 8 x 8 grid-reduction module. This is the reduction module used by the pure Inception-v4 network in Figure 9. [Diagram omitted: overall Inception-v4 layout -- stem (output 35x35x384), 4 x Inception-A, Reduction-A (output 17x17x1024), 7 x Inception-B, Reduction-B (output 8x8x1536), 3 x Inception-C, average pooling, dropout (keep 0.8), softmax over 1000 classes, from a 299x299x3 input.] Figure 9. The overall schema of the Inception-v4 network. For the detailed modules, please refer to Figures 3, 4, 5, 6, 7 and 8 for the detailed structure of the various components. [Diagram omitted.] Figure 10. The schema for the 35 x 35 grid (Inception-ResNet-A) module of the Inception-ResNet-v1 network. | 1602.07261#11 | 1602.07261#13 | 1602.07261 | [
"1512.00567"
] |
1602.07261#13 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | [Diagram omitted.] Figure 11. The schema for the 17 x 17 grid (Inception-ResNet-B) module of the Inception-ResNet-v1 network. [Diagram omitted.] Figure 12. "Reduction-B" 17 x 17 to 8 x 8 grid-reduction module. This module is used by the smaller Inception-ResNet-v1 network in Figure 15. [Diagram omitted.] Figure 13. The schema for the 8x8 grid (Inception-ResNet-C) module of the Inception-ResNet-v1 network. [Diagram omitted.] Figure 14. The stem of the Inception-ResNet-v1 network. [Diagram omitted: overall Inception-ResNet layout -- stem, 5 x Inception-ResNet-A, Reduction-A, 10 x Inception-ResNet-B, Reduction-B, 5 x Inception-ResNet-C, average pooling, dropout (keep 0.8), softmax over 1000 classes, from a 299x299x3 input.] | 1602.07261#12 | 1602.07261#14 | 1602.07261 | [
"1512.00567"
] |
1602.07261#14 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Figure 15. Schema for Inception-ResNet-v1 and Inception-ResNet-v2 networks. This schema applies to both networks but the underlying components differ. Inception-ResNet-v1 uses the blocks as described in Figures 14, 10, 7, 11, 12 and 13. Inception-ResNet-v2 uses the blocks as described in Figures 3, 16, 7, 17, 18 and 19. The output sizes in the diagram refer to the activation vector tensor shapes of Inception-ResNet-v1. [Diagram omitted.] Figure 16. The schema for the 35 x 35 grid (Inception-ResNet-A) module of the Inception-ResNet-v2 network. [Diagram omitted.] Figure 17. The schema for the 17 x 17 grid (Inception-ResNet-B) module of the Inception-ResNet-v2 network. [Diagram omitted.] Figure 18. The schema for the 17 x 17 to 8 x 8 grid-reduction module. Reduction-B module used by the wider Inception-ResNet-v1 network in Figure 15. [Diagram omitted.] Figure 19. The schema for the 8x8 grid (Inception-ResNet-C) module of the Inception-ResNet-v2 network. | 1602.07261#13 | 1602.07261#15 | 1602.07261 | [
"1512.00567"
] |
1602.07261#15 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Table 1. The number of filters of the Reduction-A module for the three Inception variants presented in this paper -- Inception-v4: k=192, l=224, m=256, n=384; Inception-ResNet-v1: k=192, l=192, m=256, n=384; Inception-ResNet-v2: k=256, l=256, m=384, n=384. The four numbers in the columns parametrize the four convolutions of Figure 7. [Diagram omitted.] Figure 20. The general schema for scaling combined Inception-resnet modules. We expect that the same idea is useful in the general resnet case, where instead of the Inception block an arbitrary subnetwork is used. The scaling block just scales the last linear activations by a suitable constant, typically around 0.1. | 1602.07261#14 | 1602.07261#16 | 1602.07261 | [
"1512.00567"
] |
1602.07261#16 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | # 3.3. Scaling of the Residuals Also we found that if the number of filters exceeded 1000, the residual variants started to exhibit instabilities and the network has just "died" early in the training, meaning that the last layer before the average pooling started to produce only zeros after a few tens of thousands of iterations. This could not be prevented, neither by lowering the learning rate, nor by adding an extra batch-normalization to this layer. We found that scaling down the residuals before adding them to the previous layer activation seemed to stabilize the training. In general we picked some scaling factors between 0.1 and 0.3 to scale the residuals before their being added to the accumulated layer activations (cf. Figure 20). | 1602.07261#15 | 1602.07261#17 | 1602.07261 | [
"1512.00567"
] |
1602.07261#17 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | A similar instability was observed by He et al. in [5] in the case of very deep residual networks, and they suggested a two-phase training where the first "warm-up" phase is done with a very low learning rate, followed by a second phase with a high learning rate. We found that if the number of filters is very high, then even a very low (0.00001) learning rate is not sufficient to cope with the instabilities and the training with a high learning rate had a chance to destroy its effects. We found it much more reliable to just scale the residuals. Even where the scaling was not strictly necessary, it never seemed to harm the final accuracy, but it helped to stabilize the training. # 4. Training Methodology We have trained our networks with stochastic gradient utilizing the TensorFlow [1] distributed machine learning system using 20 replicas, each running on an NVidia Kepler GPU. Our earlier experiments used momentum [13] with a decay of 0.9, while our best models were achieved using [Figure 21 plot omitted.] Figure 21. | 1602.07261#16 | 1602.07261#18 | 1602.07261 | [
"1512.00567"
] |
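The fix described in Section 3.3 above is a single scalar multiplication on the residual branch before the addition. A hedged sketch with the scale constant in the 0.1-0.3 range the authors report, and with an illustrative stand-in for the Inception block:

```python
# Hedged sketch of residual scaling (Figure 20): the block output is scaled by
# a small constant before being added to the shortcut, then a ReLU is applied.
import torch
import torch.nn as nn

class ScaledResidual(nn.Module):
    def __init__(self, inception_block, scale=0.1):
        super().__init__()
        self.block = inception_block   # any module that preserves the depth
        self.scale = scale
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.scale * self.block(x))

block = ScaledResidual(nn.Conv2d(384, 384, kernel_size=1), scale=0.2)
print(block(torch.randn(1, 384, 17, 17)).shape)   # torch.Size([1, 384, 17, 17])
```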
1602.07261#18 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Top-1 error evolution during training of pure Inception-v3 vs a residual network of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual model was training much faster, but reached slightly worse final accuracy than the traditional Inception-v3. RMSProp with decay of 0.9 and ε = 1.0. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. Model evaluations are performed using a running average of the parameters computed over time. # 5. Experimental Results First we observe the top-1 and top-5 validation-error evolution of the four variants during training. After the experiment was conducted, we have found that our continuous evaluation was conducted on a subset of the validation set which omitted about 1700 blacklisted entities due to poor bounding boxes. It turned out that the omission should have been only performed for the CLSLOC benchmark, but yields somewhat incomparable (more optimistic) numbers when compared to other reports including some earlier reports by our team. The difference is about 0.3% for top-1 error and about 0.15% for the top-5 error. However, since the differences are consistent, we think the comparison between the curves is a fair one. On the other hand, we have rerun our multi-crop and ensemble results on the complete validation set consisting of 50000 images. | 1602.07261#17 | 1602.07261#19 | 1602.07261 | [
"1512.00567"
] |
1602.07261#19 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Also the final ensemble result was also performed on the test set and sent to the ILSVRC test server for validation to verify that our tuning did not result in over-fitting. We would like to stress that this final validation was done only once and we have submitted our results only twice in the last year: once for the BN-Inception paper and later during the ILSVRC-2015 CLSLOC competition, so we believe that the test set numbers constitute a true estimate of the generalization capabilities of our model. Finally, we present some comparisons between various versions of Inception and Inception-ResNet. The models Inception-v3 and Inception-v4 are deep convolutional net- [Figure 22 plot omitted.] Figure 22. | 1602.07261#18 | 1602.07261#20 | 1602.07261 | [
"1512.00567"
] |
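The Training Methodology section quotes concrete optimizer settings: RMSProp with decay 0.9 and epsilon 1.0, an initial learning rate of 0.045, decayed by 0.94 every two epochs, with evaluation on a running parameter average. The snippet below is a hedged, single-process sketch of those hyperparameters -- the paper used a 20-replica distributed TensorFlow setup, and the stand-in model and loop structure here are assumptions for illustration only.

```python
# Hedged sketch of the quoted optimizer and learning-rate schedule.
import torch

model = torch.nn.Conv2d(3, 32, kernel_size=3)    # stand-in for the full network
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.045, alpha=0.9, eps=1.0)

def lr_at_epoch(epoch, base_lr=0.045, rate=0.94, every=2):
    return base_lr * (rate ** (epoch // every))  # exponential decay every two epochs

for epoch in range(6):
    for group in optimizer.param_groups:
        group["lr"] = lr_at_epoch(epoch)
    # ... training loop over ImageNet batches would go here ...
    print(epoch, round(lr_at_epoch(epoch), 5))
```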
1602.07261#20 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Top-5 error evolution during training of pure Inception-v3 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version has trained much faster and reached slightly better final recall on the validation set. [Figure 23 plot omitted.] Figure 23. Top-1 error evolution during training of pure Inception-v3 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version was training much faster and reached slightly better final accuracy than the traditional Inception-v4. [Table 2 -- Network / Top-1 Error / Top-5 Error: BN-Inception [6] 25.2% / 7.8%; Inception-v3 [15] 21.2% / 5.6%; Inception-ResNet-v1 21.3% / 5.5%; Inception-v4 20.0% / 5.0%; Inception-ResNet-v2 19.9% / 4.9%.] | 1602.07261#19 | 1602.07261#21 | 1602.07261 | [
"1512.00567"
] |
1602.07261#21 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Table 2. Single crop - single model experimental results. Reported on the non-blacklisted subset of the validation set of ILSVRC 2012. works not utilizing residual connections, while Inception-ResNet-v1 and Inception-ResNet-v2 are Inception style networks that utilize residual connections instead of filter concatenation. Table 2 shows the single-model, single crop top-1 and top-5 error of the various architectures on the validation set. [Figure 24 plot omitted.] Figure 24. Top-5 error evolution during training of pure Inception-v4 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version trained faster and reached slightly better final recall on the validation set. [Figure 25 plot omitted.] Figure 25. Top-5 error evolution of all four models (single model, single crop). Showing the improvement due to larger model size. Although the residual version converges faster, the final accuracy seems to mainly depend on the model size. [Figure 26 plot omitted.] | 1602.07261#20 | 1602.07261#22 | 1602.07261 | [
"1512.00567"
] |
1602.07261#22 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Figure 26. Top-1 error evolution of all four models (single model, single crop). This paints a similar picture as the top-5 evaluation. Table 3 shows the performance of the various models with a small number of crops: 10 crops for ResNet as was reported in [5]; for the Inception variants, we have used the 12 crops evaluation as described in [14]. [Table 3 -- Network / Crops / Top-1 Error / Top-5 Error: ResNet-151 [5] 10 / 21.4% / 5.7%; Inception-v3 [15] 12 / 19.8% / 4.6%; Inception-ResNet-v1 12 / 19.8% / 4.6%; Inception-v4 12 / 18.7% / 4.2%; Inception-ResNet-v2 12 / 18.7% / 4.1%.] Table 3. 10/12 crops evaluations - single model experimental results. Reported on all 50000 images of the validation set of ILSVRC 2012. [Table 4 -- Network / Crops / Top-1 Error / Top-5 Error: ResNet-151 [5] dense / 19.4% / 4.5%; Inception-v3 [15] 144 / 18.9% / 4.3%; Inception-ResNet-v1 144 / 18.8% / 4.3%; Inception-v4 144 / 17.7% / 3.8%; Inception-ResNet-v2 144 / 17.8% / 3.7%.] Table 4. 144 crops evaluations - single model experimental results. Reported on all 50000 images of the validation set of ILSVRC 2012. [Table 5 -- Network / # Models / Top-1 Error / Top-5 Error: ResNet-151 [5] 6 / - / 3.6%; Inception-v3 [15] 4 / 17.3% / 3.6%; Inception-v4 + 3x Inception-ResNet-v2 4 / 16.5% / 3.1%.] Table 5. Ensemble results with 144 crops/dense evaluation. Reported on all 50000 images of the validation set of ILSVRC 2012. | 1602.07261#21 | 1602.07261#23 | 1602.07261 | [
"1512.00567"
] |
1602.07261#23 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | For Inception-v4(+Residual), the ensemble consists of one pure Inception-v4 and three Inception-ResNet-v2 models and was evaluated both on the validation and on the test set. The test-set performance was 3.08% top-5 error, verifying that we don't overfit on the validation set. Table 4 shows the single model performance of the various models. For the residual network the dense evaluation result is reported from [5]. For the inception networks, the 144 crops strategy was used as described in [14]. Table 5 compares ensemble results. For the pure residual network the 6 models dense evaluation result is reported from [5]. For the inception networks 4 models were ensembled using the 144 crops strategy as described in [14]. # 6. Conclusions We have presented three new network architectures in detail: | 1602.07261#22 | 1602.07261#24 | 1602.07261 | [
"1512.00567"
] |
1602.07261#24 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | • Inception-ResNet-v1: a hybrid Inception version that has a similar computational cost to Inception-v3 from [15]. • Inception-ResNet-v2: a costlier hybrid Inception version with significantly improved recognition performance. • Inception-v4: a pure Inception variant without residual connections with roughly the same recognition performance as Inception-ResNet-v2. We studied how the introduction of residual connections leads to dramatically improved training speed for the Inception architecture. Also our latest models (with and without residual connections) outperform all our previous networks, just by virtue of the increased model size. | 1602.07261#23 | 1602.07261#25 | 1602.07261 | [
"1512.00567"
] |
1602.07261#25 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Table 2. Single crop - single model experimental results. Reported on the non-blacklisted subset of the validation set of ILSVRC 2012. works not utilizing residual connections while Inception- ResNet-v1 and Inception-ResNet-v2 are Inception style net- works that utilize residual connections instead of ï¬ lter con- catenation. Table 2 shows the single-model, single crop top-1 and top-5 error of the various architectures on the validation set. Error (09-5) % 4 <== inception-v4 â _ inception-resnet-v2 40 6 0 80 100 10 a0 Â¥60 pach Figure 24. Top-5 error evolution during training of pure Inception- v4 vs a residual Inception of similar computational cost. The eval- uation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version trained faster and reached slightly better ï¬ nal recall on the validation set. 45 inception-v4 4. inception-resnet-v2 a8 inception 30) inception-resnet-v1 280 7 0 a 700 i ao 700 Epoch Figure 25. Top-5 error evolution of all four models (single model, single crop). Showing the improvement due to larger model size. Although the residual version converges faster, the ï¬ nal accuracy seems to mainly depend on the model size. inception inception-resnet-v1 Ey co 30 cy 700 or) a0 70 Epoch | 1602.07261#24 | 1602.07261#26 | 1602.07261 | [
"1512.00567"
] |
1602.07261#26 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Figure 26. Top-1 error evolution of all four models (single model, single crop). This paints a similar picture as the top-5 evaluation. Table 3 shows the performance of the various models with a small number of crops: 10 crops for ResNet as was reported in [5]), for the Inception variants, we have used the 12 crops evaluation as as described in [14]. Network ResNet-151 [5] Inception-v3 [15] Inception-ResNet-v1 Inception-v4 Inception-ResNet-v2 Crops Top-1 Error Top-5 Error 10 12 12 12 12 21.4% 19.8% 19.8% 18.7% 18.7% 5.7% 4.6% 4.6% 4.2% 4.1% Table 3. 10/12 crops evaluations - single model experimental re- sults. Reported on the all 50000 images of the validation set of ILSVRC 2012. Network ResNet-151 [5] Inception-v3 [15] Inception-ResNet-v1 Inception-v4 Inception-ResNet-v2 Crops Top-1 Error Top-5 Error dense 144 144 144 144 19.4% 18.9% 18.8% 17.7% 17.8% 4.5% 4.3% 4.3% 3.8% 3.7% Table 4. 144 crops evaluations - single model experimental results. Reported on the all 50000 images of the validation set of ILSVRC 2012. Network ResNet-151 [5] Inception-v3 [15] 6 4 â 17.3% 3.6% 3.6% Inception-v4 + 3Ã Inception-ResNet-v2 4 16.5% 3.1% # Models Top-1 Error Top-5 Error Table 5. Ensemble results with 144 crops/dense evaluation. Re- ported on the all 50000 images of the validation set of ILSVRC 2012. | 1602.07261#25 | 1602.07261#27 | 1602.07261 | [
"1512.00567"
] |
1602.07261#27 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | For Inception-v4(+Residual), the ensemble consists of one pure Inception-v4 and three Inception-ResNet-v2 models and was evaluated both on the validation and on the test set. The test-set performance was 3.08% top-5 error, verifying that we don't overfit on the validation set. Table 4 shows the single-model performance of the various models. For the residual network the dense evaluation result is reported from [5]. For the Inception networks, the 144 crops strategy was used as described in [14]. Table 5 compares ensemble results. For the pure residual network the 6 models dense evaluation result is reported from [5]. For the Inception networks 4 models were ensembled using the 144 crops strategy as described in [14]. # 6. Conclusions We have presented three new network architectures in detail: | 1602.07261#26 | 1602.07261#28 | 1602.07261 | [
"1512.00567"
] |
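The multi-crop and ensemble numbers above all come from the same evaluation recipe: average the class probabilities of one or more models over a set of crops of each validation image and take the arg max. The sketch below shows only that averaging step; the crop-generation schemes of [14] and the model definitions are not reproduced, and the function and argument names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def ensemble_multicrop_predict(models, crops):
    """Average class probabilities over models and crops, then pick the arg max.

    models: list of callables, each mapping one crop (H, W, 3) to a probability
            vector over the 1000 ILSVRC classes.
    crops:  array of shape (num_crops, H, W, 3) holding the crops of one image.
    """
    probs = np.mean([[m(c) for c in crops] for m in models], axis=(0, 1))
    return int(probs.argmax()), probs
```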
1602.07261#28 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | • Inception-ResNet-v1: a hybrid Inception version that has a similar computational cost to Inception-v3 from [15]. • Inception-ResNet-v2: a costlier hybrid Inception version with significantly improved recognition performance. • Inception-v4: a pure Inception variant without residual connections, with roughly the same recognition performance as Inception-ResNet-v2. We studied how the introduction of residual connections leads to dramatically improved training speed for the Inception architecture. Also, our latest models (with and without residual connections) outperform all our previous networks, just by virtue of the increased model size. | 1602.07261#27 | 1602.07261#29 | 1602.07261 | [
"1512.00567"
] |
1602.07261#29 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | # References [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. | 1602.07261#28 | 1602.07261#30 | 1602.07261 | [
"1512.00567"
] |
1602.07261#30 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. [2] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. | 1602.07261#29 | 1602.07261#31 | 1602.07261 | [
"1512.00567"
] |
1602.07261#31 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223–1231, 2012. [3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision – ECCV 2014, pages 184–199. Springer, 2014. [4] R. Girshick, J. Donahue, T. Darrell, and J. | 1602.07261#30 | 1602.07261#32 | 1602.07261 | [
"1512.00567"
] |
1602.07261#32 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. [5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. [6] S. Ioffe and C. Szegedy. | 1602.07261#31 | 1602.07261#33 | 1602.07261 | [
"1512.00567"
] |
1602.07261#33 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015. [7] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725– | 1602.07261#32 | 1602.07261#34 | 1602.07261 | [
"1512.00567"
] |
1602.07261#34 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | 1732. IEEE, 2014. [8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [9] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013. [10] J. Long, E. Shelhamer, and T. Darrell. | 1602.07261#33 | 1602.07261#35 | 1602.07261 | [
"1512.00567"
] |
1602.07261#35 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015. [11] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. 2014. [12] K. Simonyan and A. Zisserman. | 1602.07261#34 | 1602.07261#36 | 1602.07261 | [
"1512.00567"
] |
1602.07261#36 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [13] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1139–1147. JMLR Workshop and Conference Proceedings, May 2013. [14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. | 1602.07261#35 | 1602.07261#37 | 1602.07261 | [
"1512.00567"
] |
1602.07261#37 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. [15] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. [16] T. Tieleman and G. Hinton. | 1602.07261#36 | 1602.07261#38 | 1602.07261 | [
"1512.00567"
] |
1602.07261#38 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05. [17] A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1653– | 1602.07261#37 | 1602.07261#39 | 1602.07261 | [
"1512.00567"
] |
1602.07261#39 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | 1660. IEEE, 2014. [18] N. Wang and D.-Y. Yeung. Learning a deep compact image representation for visual tracking. In Advances in Neural Information Processing Systems, pages 809–817, 2013. | 1602.07261#38 | 1602.07261 | [
"1512.00567"
] |
|
1602.02867#0 | Value Iteration Networks | arXiv:1602.02867v4 [cs.AI] 20 Mar 2017 # Value Iteration Networks Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel Dept. of Electrical Engineering and Computer Sciences, UC Berkeley # Abstract We introduce the value iteration network (VIN): a fully differentiable neural network with a "planning module" embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN-based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains. | 1602.02867#1 | 1602.02867 | [
"1602.02261"
] |
|
1602.02867#1 | Value Iteration Networks | # 1 Introduction Over the last decade, deep convolutional neural networks (CNNs) have revolutionized supervised learning for tasks such as object recognition, action recognition, and semantic segmentation [3, 15, 6, 19]. Recently, CNNs have been applied to reinforcement learning (RL) tasks with visual observations such as Atari games [21], robotic manipulation [18], and imitation learning (IL) [9]. In these tasks, a neural network (NN) is trained to represent a policy – a mapping from an observation of the system's state to an action, with the goal of representing a control strategy that has good long-term behavior, typically quantified as the minimization of a sequence of time-dependent costs. The sequential nature of decision making in RL is inherently different than the one-step decisions in supervised learning, and in general requires some form of planning [2]. However, most recent deep RL works [21, 18, 9] employed NN architectures that are very similar to the standard networks used in supervised learning tasks, which typically consist of CNNs for feature extraction, and fully connected layers that map the features to a probability distribution over actions. Such networks are inherently reactive, and in particular, lack explicit planning computation. The success of reactive policies in sequential problems is due to the learning algorithm, which essentially trains a reactive policy to select actions that have good long-term consequences in its training domain. To understand why planning can nevertheless be an important ingredient in a policy, consider the grid-world navigation task depicted in Figure 1 (left), in which the agent can observe a map of its domain, and is required to navigate between some obstacles to a target position. One hopes that after training a policy to solve several instances of this problem with different obstacle configurations, the policy would generalize to solve a different, unseen domain, as in Figure 1 (right). However, as we show in our experiments, while standard CNN-based networks can be easily trained to solve a set of such maps, they do not generalize well to new tasks outside this set, because they do not understand the goal-directed nature of the behavior. This observation suggests that the computation learned by reactive policies is different from planning, which is required to solve a new task¹. ¹In principle, with enough training data that covers all possible task confi | 1602.02867#0 | 1602.02867#2 | 1602.02867 | [
"1602.02261"
] |
1602.02867#2 | Value Iteration Networks | gurations, and a rich enough policy representation, a reactive policy can learn to map each task to its optimal policy. In practice, this is often too expensive, and we offer a more data-efficient approach by exploiting a flexible prior about the planning computation underlying the behavior. In this work, we propose a NN-based policy that can effectively learn to plan. Our model, termed a value-iteration network (VIN), has a differentiable "planning program" embedded within the NN structure. The key to our approach is an observation that the classic value-iteration (VI) planning algorithm [1, 2] may be represented by a specific Figure 1: | 1602.02867#1 | 1602.02867#3 | 1602.02867 | [
"1602.02261"
] |
1602.02867#3 | Value Iteration Networks | Two instances of a grid-world domain. Task is to move to the goal between the obstacles. type of CNN. By embedding such a VI network module inside a standard feed-forward classification network, we obtain a NN model that can learn the parameters of a planning computation that yields useful predictions. The VI block is differentiable, and the whole network can be trained using standard backpropagation. This makes our policy simple to train using standard RL and IL algorithms, and straightforward to integrate with NNs for perception and control. Connections between planning algorithms and recurrent NNs were previously explored by Ilin et al. [12]. Our work builds on related ideas, but results in a more broadly applicable policy representation. Our approach is different from model-based RL [25, 4], which requires system identification to map the observations to a dynamics model, which is then solved for a policy. In many applications, including robotic manipulation and locomotion, accurate system identification is difficult, and modelling errors can severely degrade the policy performance. In such domains, a model-free approach is often preferred [18]. Since a VIN is just a NN policy, it can be trained model free, without requiring explicit system identifi | 1602.02867#2 | 1602.02867#4 | 1602.02867 | [
"1602.02261"
] |
1602.02867#4 | Value Iteration Networks | cation. In addition, the effects of modelling errors in VINs can be mitigated by training the network end-to-end, similarly to the methods in [13, 11]. We demonstrate the effectiveness of VINs within standard RL and IL algorithms in various problems, some of which require visual perception, continuous control, and also natural language based decision making in the WebNav challenge [23]. After training, the policy learns to map an observation to a planning computation relevant for the task, and generate action predictions based on the resulting plan. As we demonstrate, this leads to policies that generalize better to new, unseen, task instances. # 2 Background In this section we provide background on planning, value iteration, CNNs, and policy representations for RL and IL. In the sequel, we shall show that CNNs can implement a particular form of planning computation similar to the value iteration algorithm, which can then be used as a policy for RL or IL. Value Iteration: A standard model for sequential decision making and planning is the Markov decision process (MDP) [1, 2]. An MDP M consists of states s ∈ S, actions a ∈ A, a reward function R(s, a), and a transition kernel P(s′|s, a) that encodes the probability of the next state given the current state and action. A policy π(a|s) prescribes an action distribution for each state. The goal in an MDP is to find a policy that obtains high rewards in the long term. | 1602.02867#3 | 1602.02867#5 | 1602.02867 | [
"1602.02261"
] |
1602.02867#5 | Value Iteration Networks | Formally, the value V^π(s) of a state under policy π is the expected discounted sum of rewards when starting from that state and executing policy π, $V^{\pi}(s) = \mathbb{E}^{\pi}\left[\sum_{t \ge 0} \gamma^{t} r(s_t, a_t) \mid s_0 = s\right]$, where γ ∈ (0, 1) is a discount factor, and $\mathbb{E}^{\pi}$ denotes an expectation over trajectories of states and actions (s_0, a_0, s_1, a_1, ...), in which actions are selected according to π, and states evolve according to the transition kernel P(s′|s, a). The optimal value function V*(s) = max_π V^π(s) is the maximal long-term return possible from a state. A policy π* is said to be optimal if V^{π*}(s) = V*(s) ∀s. A popular algorithm for calculating V* and π* is value iteration (VI): $V_{n+1}(s) = \max_a Q_n(s, a) \ \forall s$, where $Q_n(s, a) = R(s, a) + \gamma \sum_{s'} P(s'|s, a) V_n(s')$. (1) It is well known that the value function V_n in VI converges as n → ∞ to V*, from which an optimal policy may be derived as π*(s) = arg max_a Q_∞(s, a). | 1602.02867#4 | 1602.02867#6 | 1602.02867 | [
"1602.02261"
] |
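For concreteness, Eq. (1) is only a few lines of array code when the MDP is small and fully known. The following sketch runs tabular VI on an explicit reward matrix and transition tensor; the array shapes, discount factor, and fixed iteration count are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.99, K=100):
    """Tabular VI, Eq. (1): V_{n+1}(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V_n(s') ].

    P: transition tensor of shape (A, S, S), with P[a, s, s2] = P(s2 | s, a).
    R: reward matrix of shape (S, A).
    """
    S, A = R.shape
    V = np.zeros(S)
    Q = np.array(R, dtype=float)
    for _ in range(K):
        Q = R + gamma * np.einsum('asp,p->sa', P, V)  # Q_n(s, a)
        V = Q.max(axis=1)                             # V_{n+1}(s) = max_a Q_n(s, a)
    policy = Q.argmax(axis=1)                         # greedy policy derived from Q
    return V, policy
```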
1602.02867#6 | Value Iteration Networks | Convolutional Neural Networks (CNNs) are NNs with a particular architecture that has proved useful for computer vision, among other domains [15]. A CNN is comprised of stacked convolution and max-pooling layers. The input to each convolution layer is a 3-dimensional signal X, typically, an image with l channels, m horizontal pixels, and n vertical pixels, and its output h is an l′-channel convolution of the image with kernels W^1, ..., W^{l′}: $h_{l',i',j'} = \sigma\left(\sum_{l,i,j} W^{l'}_{l,i,j} X_{l,i'-i,j'-j}\right)$, where σ is some scalar activation function. A max-pooling layer selects, for each channel l and pixel i, j in h, the maximum value among its neighbors N(i, j): $h^{\text{maxpool}}_{l,i,j} = \max_{i',j' \in N(i,j)} h_{l,i',j'}$. Typically, the neighbors N(i, j) are chosen as a k × k image | 1602.02867#5 | 1602.02867#7 | 1602.02867 | [
"1602.02261"
] |
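A direct (unoptimized) rendering of the two operations just defined can help fix the notation. In the sketch below, zero padding, cross-correlation-style indexing, and the tanh activation are simplifying assumptions; real CNN libraries implement the same computation far more efficiently.

```python
import numpy as np

def conv_layer(X, W, sigma=np.tanh):
    """h[l2, i, j] = sigma( sum over l, di, dj of W[l2, l, di, dj] * X[l, i+di-pad, j+dj-pad] )."""
    l_out, l_in, k, _ = W.shape
    _, m, n = X.shape
    pad = k // 2
    Xp = np.pad(X, ((0, 0), (pad, pad), (pad, pad)))
    h = np.zeros((l_out, m, n))
    for i in range(m):
        for j in range(n):
            patch = Xp[:, i:i + k, j:j + k]  # k x k neighborhood of pixel (i, j), all channels
            h[:, i, j] = np.tensordot(W, patch, axes=([1, 2, 3], [0, 1, 2]))
    return sigma(h)

def max_pool(h, d=2):
    """Max over non-overlapping d x d neighborhoods, downsampling the image by a factor d."""
    l, m, n = h.shape
    h = h[:, :m - m % d, :n - n % d]
    return h.reshape(l, m // d, d, n // d, d).max(axis=(2, 4))
```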
1602.02867#7 | Value Iteration Networks | patch around pixel i, j. After max-pooling, the image is down-sampled by a constant factor d, commonly 2 or 4, resulting in an output signal with l′ channels, m/d horizontal pixels, and n/d vertical pixels. CNNs are typically trained using stochastic gradient descent (SGD), with backpropagation for computing gradients. | 1602.02867#6 | 1602.02867#8 | 1602.02867 | [
"1602.02261"
] |
1602.02867#8 | Value Iteration Networks | Reinforcement Learning and Imitation Learning: In MDPs where the state space is very large or continuous, or when the MDP transitions or rewards are not known in advance, planning algorithms cannot be applied. In these cases, a policy can be learned from either expert supervision (IL) or by trial and error (RL). While the learning algorithms in both cases are different, the policy representations – which are the focus of this work – are similar. Additionally, most state-of-the-art algorithms are agnostic to the policy representation, and only require it to be differentiable, for performing gradient descent on some algorithm-specific loss function. Therefore, in this paper we do not commit to a specific learning algorithm, and only consider the policy. Let φ(s) denote an observation for state s. The policy is specified as a parametrized function π_θ(a|φ(s)) mapping observations to a probability over actions, where θ are the policy parameters. For example, the policy could be represented as a neural network, with θ denoting the network weights. The goal is to tune the parameters such that the policy behaves well in the sense that π_θ(a|φ(s)) ≈ π*(a|φ(s)), where π* is the optimal policy for the MDP, as defined in Section 2. In IL, a dataset of N state observations and corresponding optimal actions {φ(s_i), a_i ∼ π*(φ(s_i))}_{i=1}^N is generated by an expert. Learning a policy then becomes an instance of supervised learning [24, 9]. In RL, the optimal action is not available, but instead, the agent can act in the world and observe the rewards and state transitions its actions effect. RL algorithms use these observations to improve the value of the policy. # 3 The Value Iteration Network Model In this section we introduce a general policy representation that embeds an explicit planning module. As stated earlier, the motivation for such a representation is that a natural solution to many tasks, such as the path planning described above, involves planning on some model of the domain. Let M denote the MDP of the domain for which we design our policy π. We assume that there is some unknown MDP ¯M such that the optimal plan in ¯M contains useful information about the optimal policy in the original task M. However, we emphasize that we do not assume to know ¯M in advance. Our idea is to equip the policy with the ability to learn and solve ¯M, and to add the solution of ¯M as an element in the policy π. We hypothesize that this will lead to a policy that automatically learns a useful ¯M to plan on. We denote by ¯s ∈ ¯S, ¯a ∈ ¯A, ¯R(¯s, ¯a), and ¯P(¯s′|¯s, ¯a) the states, actions, rewards, and transitions in ¯M. To facilitate a connection between M and ¯M, we let ¯R and ¯P depend on the observation in M, namely, ¯R = fR(φ(s)) and ¯P = fP(φ(s)), and we will later learn the functions fR and fP as a part of the policy learning process. For example, in the grid-world domain described above, we can let ¯M have the same state and action spaces as the true grid-world M. The reward function fR can map an image of the domain to a high reward at the goal, and negative reward near an obstacle, while fP can encode deterministic movements in the grid-world that do not depend on the observation. While these rewards and transitions are not necessarily the true rewards and transitions in the task, an optimal plan in ¯M will still follow a trajectory that avoids obstacles and reaches the goal, similarly to the optimal plan in M. Once an MDP ¯M has been specified, any standard planning algorithm can be used to obtain the value function ¯V*. In the next section, we shall show that using a particular implementation of VI for planning has the advantage of being differentiable, and simple to implement within a NN framework. In this section however, we focus on how to use the planning result ¯V* within the NN policy π. Our approach is based on two important observations. The first is that the vector of values ¯V*(¯s) ∀¯s encodes all the information about the optimal plan in ¯M. Thus, adding the vector ¯V* as additional features to the policy π is sufficient for extracting information about the optimal plan in ¯M. However, an additional property of ¯V* is that the optimal decision ¯π*(¯s) at a state ¯s can depend only on a subset of the values of ¯V*, since ¯π*(¯s) = arg max_¯a ¯R(¯s, ¯a) + γ Σ_¯s′ ¯P(¯s′|¯s, ¯a) ¯V*(¯s′ | 1602.02867#7 | 1602.02867#9 | 1602.02867 | [
"1602.02261"
] |
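In the IL setting described above, fitting π_θ reduces to ordinary supervised classification of the expert's actions. The sketch below does this with a linear-softmax policy trained by plain gradient descent; the linear policy, learning rate, and epoch count are illustrative assumptions (in the paper the policy is a neural network trained with standard optimizers).

```python
import numpy as np

def train_il_policy(obs, acts, n_actions, lr=0.1, epochs=200):
    """Fit pi_theta(a | phi(s)) to expert pairs {phi(s_i), a_i} by cross-entropy minimization.

    obs:  array (N, d) of observation features phi(s_i).
    acts: array (N,) of integer expert actions a_i.
    """
    N, d = obs.shape
    theta = np.zeros((d, n_actions))
    onehot = np.eye(n_actions)[acts]
    for _ in range(epochs):
        logits = obs @ theta
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)     # pi_theta(a | phi(s_i))
        grad = obs.T @ (probs - onehot) / N           # gradient of mean cross-entropy
        theta -= lr * grad
    return theta
```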
1602.02867#9 | Value Iteration Networks | ). Therefore, if the MDP has a local connectivity structure, such as in the grid-world example above, the states for which ¯P(¯s′|¯s, ¯a) > 0 form a small subset of ¯S. In NN terminology, this is a form of attention [32], in the sense that for a given label prediction (action), only a subset of the input features (value function) is relevant. Attention is known to improve learning performance by reducing the effective number of network parameters during learning. Therefore, the second element in our network is an attention module that outputs a vector of (attention | 1602.02867#8 | 1602.02867#10 | 1602.02867 | [
"1602.02261"
] |
1602.02867#10 | Value Iteration Networks | modulated) values ψ(s). Finally, the vector ψ(s) is added as additional features to a reactive policy π_re(a|φ(s), ψ(s)). The full network architecture is depicted in Figure 2 (left). Returning to our grid-world example, at a particular state s, the reactive policy only needs to query the values of the states neighboring s in order to select the correct action. Thus, the attention module in this case could return a ψ(s) vector with a subset of ¯V* | 1602.02867#9 | 1602.02867#11 | 1602.02867 | [
"1602.02261"
] |
1602.02867#11 | Value Iteration Networks | for these neighboring states. [Figure 2 diagrams omitted; recoverable labels: Observation φ(s), Attention, Reactive Policy π_re(a|φ(s), ψ(s)), and a VI Module with ¯R, ¯P, "plan on map ¯M", new value ¯V, and K recurrence.] Figure 2: Planning-based NN models. Left: a general policy representation that adds value function features from a planner to a reactive policy. Right: VI module – a CNN representation of VI algorithm. Let θ denote all the parameters of the policy, namely, the parameters of fR, fP, and π_re, and note that ψ(s) is in fact a function of φ(s). Therefore, the policy can be written in the form π_θ(a|φ(s)), similarly to the standard policy form (cf. Section 2). If we could back-propagate through this function, then potentially we could train the policy using standard RL and IL algorithms, just like any other standard policy representation. While it is easy to design functions fR and fP that are differentiable (and we provide several examples in our experiments), back-propagating the gradient through the planning algorithm is not trivial. In the following, we propose a novel interpretation of an approximate VI algorithm as a particular form of a CNN. This allows us to conveniently treat the planning module as just another NN, and by back-propagating through it, we can train the whole policy end-to-end. # 3.1 The VI Module We now introduce the VI module – | 1602.02867#10 | 1602.02867#12 | 1602.02867 | [
"1602.02261"
] |
1602.02867#12 | Value Iteration Networks | # 3.1 The VI Module We now introduce the VI module – a NN that encodes a differentiable planning computation. Our starting point is the VI algorithm (1). Our main observation is that each iteration of VI may be seen as passing the previous value function V_n and reward function R through a convolution layer and a max-pooling layer. In this analogy, each channel in the convolution layer corresponds to the Q-function for a specific action, and the convolution kernel weights correspond to the discounted transition probabilities. Thus, by recurrently applying a convolution layer K times, K iterations of VI are effectively performed. Following this idea, we propose the VI network module, as depicted in Figure 2 (right). The input to the VI module is a " | 1602.02867#11 | 1602.02867#13 | 1602.02867 | [
"1602.02261"
] |
1602.02867#13 | Value Iteration Networks | reward image" ¯R of dimensions l, m, n, where here, for the purpose of clarity, we follow the CNN formulation and explicitly assume that the state space ¯S maps to a 2-dimensional grid. However, our approach can be extended to general discrete state spaces, for example, a graph, as we report in the WikiNav experiment. The reward is fed into a convolutional layer ¯Q with ¯A channels and a linear activation function: $\bar{Q}_{\bar{a},i',j'} = \sum_{l,i,j} W^{\bar{a}}_{l,i,j} \bar{R}_{l,i'-i,j'-j}$. | 1602.02867#12 | 1602.02867#14 | 1602.02867 | [
"1602.02261"
] |
1602.02867#14 | Value Iteration Networks | Each channel in this layer corresponds to ¯Q(¯s, ¯a) for a particular action ¯a. This layer is then max-pooled along the actions channel to produce the next-iteration value function layer ¯V, $\bar{V}_{i',j'} = \max_{\bar{a}} \bar{Q}(\bar{a}, i', j')$. The next-iteration value function layer ¯V is then stacked with the reward ¯R, and fed back into the convolutional layer and max-pooling layer K times, to perform K iterations of value iteration. The VI module is simply a NN architecture that has the capability of performing an approximate VI computation. Nevertheless, representing VI in this form makes learning the MDP parameters and reward function natural – by backpropagating through the network, similarly to a standard CNN. VI modules can also be composed hierarchically, by treating the value of one VI module as additional input to another VI module. We further report on this idea in the supplementary material. # 3.2 Value Iteration Networks We now have all the ingredients for a differentiable planning-based policy, which we term a value iteration network (VIN). The VIN is based on the general planning-based policy defined above, with the VI module as the planning algorithm. In order to implement a VIN, one has to specify the state | 1602.02867#13 | 1602.02867#15 | 1602.02867 | [
"1602.02261"
] |
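Putting the last two chunks together, one forward pass of the VI block is: stack the reward map with the current value map, convolve to get one ¯Q channel per abstract action, take a max over the action channels, and repeat K times. The numpy sketch below mirrors that computation (forward pass only); the kernel shapes and the zero initialization of ¯V are illustrative assumptions, and in practice the block is written in a framework such as Theano so that gradients flow through it.

```python
import numpy as np

def vi_module(R_bar, W_q, K):
    """Forward pass of the VI block.

    R_bar: reward image of shape (1, m, n), e.g. the output of f_R(phi(s)).
    W_q:   kernels of shape (A_bar, 2, k, k); input channel 0 sees R_bar,
           input channel 1 sees the current value map V_bar.
    Returns the final value map V_bar (m, n) and the Q_bar maps (A_bar, m, n).
    """
    A_bar, _, k, _ = W_q.shape
    _, m, n = R_bar.shape
    pad = k // 2
    V = np.zeros((1, m, n))
    Q = np.zeros((A_bar, m, n))
    for _ in range(K):
        X = np.concatenate([R_bar, V], axis=0)            # stack reward and value maps
        Xp = np.pad(X, ((0, 0), (pad, pad), (pad, pad)))
        for i in range(m):
            for j in range(n):
                patch = Xp[:, i:i + k, j:j + k]
                Q[:, i, j] = np.tensordot(W_q, patch, axes=([1, 2, 3], [0, 1, 2]))
        V = Q.max(axis=0, keepdims=True)                  # max-pool over the action channels
    return V[0], Q
```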
1602.02867#15 | Value Iteration Networks | and action spaces for the planning module ¯S and ¯A, the reward and transition functions fR and fP, and the attention function; we refer to this as the VIN design. For some tasks, as we show in our experiments, it is relatively straightforward to select a suitable design, while other tasks may require more thought. However, we emphasize an important point: the reward, transitions, and attention can be defined by parametric functions, and trained with the whole policy². Thus, a rough design can be specified, and then fine-tuned by end-to-end training. | 1602.02867#14 | 1602.02867#16 | 1602.02867 | [
"1602.02261"
] |
1602.02867#16 | Value Iteration Networks | Once a VIN design is chosen, implementing the VIN is straightforward, as it is simply a form of a CNN. The networks in our experiments all required only several lines of Theano [28] code. In the next section, we evaluate VIN policies on various domains, showing that by learning to plan, they achieve a better generalization capability. # 4 Experiments In this section we evaluate VINs as policy representations on various domains. Additional experiments investigating RL and hierarchical VINs, as well as technical implementation details, are discussed in the supplementary material. Source code is available at https://github.com/avivt/VIN. Our goal in these experiments is to investigate the following questions: 1. | 1602.02867#15 | 1602.02867#17 | 1602.02867 | [
"1602.02261"
] |
1602.02867#17 | Value Iteration Networks | Can VINs effectively learn a planning computation using standard RL and IL algorithms? 2. Does the planning computation learned by VINs make them better than reactive policies at generalizing to new domains? An additional goal is to point out several ideas for designing VINs for various tasks. While this is not an exhaustive list that fits all domains, we hope that it will motivate creative designs in future work. # 4.1 Grid-World Domain Our first experiment domain is a synthetic grid-world with randomly placed obstacles, in which the observation includes the position of the agent, and also an image of the map of obstacles and goal position. Figure 3 shows two random instances of such a grid-world of size 16 × 16. | 1602.02867#16 | 1602.02867#18 | 1602.02867 | [
"1602.02261"
] |
1602.02867#18 | Value Iteration Networks | We conjecture that by learning the optimal policy for several instances of this domain, a VIN policy would learn the planning computation required to solve a new, unseen, task. In such a simple domain, an optimal policy can easily be calculated using exact VI. Note, however, that here we are interested in evaluating whether a NN policy, trained using RL or IL, can learn to plan. In the following results, policies were trained using IL, by standard supervised learning from demonstrations of the optimal policy. In the supplementary material, we report additional RL experiments that show similar findings. We design a VIN for this task following the guidelines described above, where the planning MDP ¯M is a grid-world, similar to the true MDP. The reward mapping fR is a CNN mapping the image input to a reward map in the grid-world. Thus, fR should potentially learn to discriminate between obstacles, non-obstacles and the goal, and assign a suitable reward to each. | 1602.02867#17 | 1602.02867#19 | 1602.02867 | [
"1602.02261"
] |
1602.02867#19 | Value Iteration Networks | The transitions ¯P were defined as 3 × 3 convolution kernels in the VI block, exploiting the fact that transitions in the grid-world are local³. The recurrence K was chosen in proportion to the grid-world size, to ensure that information can flow from the goal state to any other state. For the attention module, we chose a trivial approach that selects the ¯Q values in the VI block for the current state, i.e., ψ(s) = ¯Q(s, ·). The final reactive policy is a fully connected network that maps ψ(s) to a probability over actions; a sketch of this attention-and-output step is given below. We compare VINs to the following NN reactive policies: | 1602.02867#18 | 1602.02867#20 | 1602.02867 | [
"1602.02261"
] |
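A sketch of the attention-and-output step referenced above: select the ¯Q values of the VI block at the agent's grid cell and pass them through a softmax output layer. A single linear output layer is used for brevity; the paper's reactive policy is a fully connected network, and the weight shapes here are illustrative assumptions.

```python
import numpy as np

def vin_action_probs(Q_bar, state_ij, W_out):
    """psi(s) = Q_bar(s, .) attention, followed by a softmax over the agent's actions.

    Q_bar:    (A_bar, m, n) Q maps produced by the VI block.
    state_ij: (i, j) grid coordinates of the current state s.
    W_out:    (A_bar, num_actions) weights of the final output layer.
    """
    i, j = state_ij
    psi = Q_bar[:, i, j]                # attention: Q values at the current state only
    logits = psi @ W_out
    e = np.exp(logits - logits.max())   # softmax
    return e / e.sum()
```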
1602.02867#20 | Value Iteration Networks | CNN network: We devised a CNN-based reactive policy inspired by the recent impressive results of DQN [21], with 5 convolution layers, and a fully connected output. While the network in [21] was trained to predict Q values, our network outputs a probability over actions. These terms are related, since π*(s) = arg max_a Q(s, a). Fully Convolutional Network (FCN): The problem setting for this domain is similar to semantic segmentation [19], in which each pixel in the image is assigned a semantic label (the action in our case). We therefore devised an FCN inspired by a state-of-the-art semantic segmentation algorithm [19], with 3 convolution layers, where the first layer has a filter that spans the whole image, to properly convey information from the goal to every other state. In Table 1 we present the average 0–1 prediction loss of each model, evaluated on a held-out test-set of maps with random obstacles, goals, and initial states, for different problem sizes. In addition, for each map, a full trajectory from the initial state was predicted, by iteratively rolling-out the next-states ²VINs are fundamentally different than inverse RL methods [22], where transitions are required to be known. ³Note that the transitions defined this way do not depend on the state ¯s. Interestingly, we shall see that the network learned to plan successful trajectories nevertheless, by appropriately shaping the reward. | 1602.02867#19 | 1602.02867#21 | 1602.02867 | [
"1602.02261"
] |
1602.02867#21 | Value Iteration Networks | [Figure 3 panels omitted; legend labels: "Shortest path" and "Predicted path".] Figure 3: Grid-world domains (best viewed in color). A,B: Two random instances of the 28 × 28 synthetic gridworld, with the VIN-predicted trajectories and ground-truth shortest paths between random start and goal positions. C: An image of the Mars domain, with points of elevation sharper than 10° colored in red. These points were calculated from a matching image of elevation data (not shown), and were not available to the learning algorithm. Note the difficulty of distinguishing between obstacles and non-obstacles. D: | 1602.02867#20 | 1602.02867#22 | 1602.02867 | [
"1602.02261"
] |
1602.02867#22 | Value Iteration Networks | The VIN-predicted (purple line with cross markers), and the shortest-path ground truth (blue line) trajectories between random start and goal positions. Table 1 body (per domain size, each model lists prediction loss / success rate / trajectory difference): 8 × 8 – VIN: 0.004 / 99.6% / 0.001, CNN: 0.02 / 97.9% / 0.006, FCN: 0.01 / 97.3% / 0.004; 16 × 16 – VIN: 0.05 / 99.3% / 0.089, CNN: 0.10 / 87.6% / 0.06, FCN: 0.07 / 88.3% / 0.05; 28 × 28 – VIN: 0.11 / 97% / 0.086, CNN: 0.13 / 74.2% / 0.078, FCN: 0.09 / 76.6% / 0.08. Table 1: Performance on grid-world domain. Top: comparison with reactive policies. For all domain sizes, VIN networks significantly outperform standard reactive networks. Note that the performance gap increases dramatically with problem size. predicted by the network. | 1602.02867#21 | 1602.02867#23 | 1602.02867 | [
"1602.02261"
] |
1602.02867#23 | Value Iteration Networks | A trajectory was said to succeed if it reached the goal without hitting obstacles. For each trajectory that succeeded, we also measured its difference in length from the optimal trajectory. The average difference and the average success rate are reported in Table 1. Clearly, VIN policies generalize to domains outside the training set. A visualization of the reward mapping fR (see supplementary material) shows that it is negative at obstacles, positive at the goal, and a small negative constant otherwise. The resulting value function has a gradient pointing towards a direction to the goal around obstacles, thus a useful planning computation was learned. | 1602.02867#22 | 1602.02867#24 | 1602.02867 | [
"1602.02261"
] |
1602.02867#24 | Value Iteration Networks | VINs also significantly outperform the reactive networks, and the performance gap increases dramatically with the problem size. Importantly, note that the prediction loss for the reactive policies is comparable to the VINs, although their success rate is significantly worse. This shows that this is not a standard case of overfitting/underfitting of the reactive policies. Rather, VIN policies, by their VI structure, focus prediction errors on less important parts of the trajectory, while reactive policies do not make this distinction, and learn the easily predictable parts of the trajectory yet fail on the complete task. The VINs have an effective depth of K, which is larger than the depth of the reactive policies. One may wonder whether any deep enough network would learn to plan. In principle, a CNN or FCN of depth K has the potential to perform the same computation as a VIN. However, it has many more parameters, requiring much more training data. We evaluate this by untying the weights in the K recurrent layers in the VIN. Our results, reported in the supplementary material, show that untying the weights degrades performance, with a stronger effect for smaller sizes of training data. # 4.2 Mars Rover Navigation In this experiment we show that VINs can learn to plan from natural image input. We demonstrate this on path-planning from overhead terrain images of a Mars landscape. | 1602.02867#23 | 1602.02867#25 | 1602.02867 | [
"1602.02261"
] |
1602.02867#25 | Value Iteration Networks | Each domain is represented by a 128 × 128 image patch, on which we defined a 16 × 16 grid-world, where each state was considered an obstacle if the terrain in its corresponding 8 × 8 image patch contained an elevation angle of 10 degrees or more, evaluated using an external elevation database. An example of the domain and terrain image is depicted in Figure 3. The MDP for shortest-path planning in this case is similar to the grid-world domain of Section 4.1, and the VIN design was similar, only with a deeper CNN in the reward mapping fR for processing the image. The policy was trained to predict the shortest path directly from the terrain image. We emphasize that the elevation data is not part of the input, and must be inferred (if needed) from the terrain image. | 1602.02867#24 | 1602.02867#26 | 1602.02867 | [
"1602.02261"
] |
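The obstacle-labelling rule above is easy to state in code: a grid cell is an obstacle when any elevation angle inside its 8 × 8 patch reaches 10°. The sketch below is an illustrative reconstruction of that rule (used to define the domain and the baseline labels), not the authors' code, and it assumes the elevation map is already given in degrees.

```python
import numpy as np

def obstacle_grid(elevation_deg, cell=8, threshold=10.0):
    """Map a 128x128 elevation-angle image to a 16x16 boolean obstacle grid."""
    H, W = elevation_deg.shape
    e = elevation_deg[:H - H % cell, :W - W % cell]
    patch_max = e.reshape(H // cell, cell, W // cell, cell).max(axis=(1, 3))
    return patch_max >= threshold
```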
1602.02867#26 | Value Iteration Networks | After training, VIN achieved a success rate of 84.8%. To put this rate in context, we compare with the best performance achievable without access to the elevation data, which is 90.3%. To make this comparison, we trained a CNN to classify whether an 8 × 8 patch is an obstacle or not. This classifier was trained using the same image data as the VIN network, but its labels were the true obstacle classifications from the elevation map (we reiterate that the VIN did not have access to these ground-truth obstacle labels during training or testing). The success rate of a planner that uses the obstacle map generated by this classifier from the raw image is 90.3%, showing that obstacle identification from the raw image is indeed challenging. Thus, the success rate of the VIN, which was trained without any obstacle labels and had to "figure out" the planning process, is quite remarkable. | 1602.02867#25 | 1602.02867#27 | 1602.02867 | [
"1602.02261"
] |
1602.02867#27 | Value Iteration Networks | # 4.3 Continuous Control We now consider a 2D path planning domain with continuous states and continuous actions, which cannot be solved using VI, and therefore a VIN cannot be naively applied. Instead, we will construct the VIN to perform "high-level" planning on a discrete, coarse, grid-world representation of the continuous domain. We shall show that a VIN can learn such a "high-level" plan, and also exploit that plan within its "low-level" continuous control policy. Moreover, the VIN policy results in better generalization than a reactive policy. Consider the domain in Figure 4. | 1602.02867#26 | 1602.02867#28 | 1602.02867 | [
"1602.02261"
] |