Dataset schema:
id: string (12 to 15 characters)
title: string (8 to 162 characters)
content: string (1 to 17.6k characters)
prechunk_id: string (0 to 15 characters)
postchunk_id: string (0 to 15 characters)
arxiv_id: string (10 characters)
references: list (length 1)
1608.06993#40
Densely Connected Convolutional Networks
A neural algorithm of artistic style. Nature Communications, 2015. 8 [6] X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In AISTATS, 2011. 3 [7] I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013. 5 [8] S. Gross and M. Wilber. Training and investigating residual nets, 2016. 5, 6 [9] B. Hariharan, P. Arbeláez, R. Girshick, and J.
1608.06993#39
1608.06993#41
1608.06993
[ "1605.07716" ]
1608.06993#41
Densely Connected Convolutional Networks
Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015. 2 [10] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, 2015. 5 [11] K. He, X. Zhang, S. Ren, and J. Sun.
1608.06993#40
1608.06993#42
1608.06993
[ "1605.07716" ]
1608.06993#42
Densely Connected Convolutional Networks
Deep residual learning for image recognition. In CVPR, 2016. 1, 2, 3, 4, 5, 6 [12] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016. 2, 3, 5, 7 [13] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger.
1608.06993#41
1608.06993#43
1608.06993
[ "1605.07716" ]
1608.06993#43
Densely Connected Convolutional Networks
Deep networks with stochastic depth. In ECCV, 2016. 1, 2, 5, 7 [14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 3 [15] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Tech Report, 2009. 5 [16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. 3, 7
1608.06993#42
1608.06993#44
1608.06993
[ "1605.07716" ]
1608.06993#44
Densely Connected Convolutional Networks
[17] G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016. 1, 3, 5, 6 [18] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel.
1608.06993#43
1608.06993#45
1608.06993
[ "1605.07716" ]
1608.06993#45
Densely Connected Convolutional Networks
Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989. 1 [19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 1, 3 [20] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In AISTATS, 2015. 2, 3, 5, 7 [21] Q. Liao and T. Poggio.
1608.06993#44
1608.06993#46
1608.06993
[ "1605.07716" ]
1608.06993#46
Densely Connected Convolutional Networks
Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv preprint arXiv:1604.03640, 2016. 2 [22] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014. 3, 5 [23] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. 2
1608.06993#45
1608.06993#47
1608.06993
[ "1605.07716" ]
1608.06993#47
Densely Connected Convolutional Networks
[24] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop, 2011. 5 [25] M. Pezeshki, L. Fan, P. Brakel, A. Courville, and Y. Bengio. Deconstructing the ladder network architecture. In ICML, 2016. 3 [26] G. Pleiss, D. Chen, G. Huang, T. Li, L. van der Maaten, and K. Q. Weinberger. Memory-effi
1608.06993#46
1608.06993#48
1608.06993
[ "1605.07716" ]
1608.06993#48
Densely Connected Convolutional Networks
cient implementation of densenets. arXiv preprint arXiv:1707.06990, 2017. 5 [27] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015. 3 [28] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In ICLR, 2015. 5 [29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al.
1608.06993#47
1608.06993#49
1608.06993
[ "1605.07716" ]
1608.06993#49
Densely Connected Convolutional Networks
Imagenet large scale visual recognition challenge. IJCV. 1, 7 [30] P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, pages 3288–3291. IEEE, 2012. 5 [31] P. Sermanet, K. Kavukcuoglu, S. Chintala, and Y. LeCun.
1608.06993#48
1608.06993#50
1608.06993
[ "1605.07716" ]
1608.06993#50
Densely Connected Convolutional Networks
Pedestrian detection with unsupervised multi-stage feature learning. In CVPR, 2013. 2 [32] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. 5 [33] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov.
1608.06993#49
1608.06993#51
1608.06993
[ "1605.07716" ]
1608.06993#51
Densely Connected Convolutional Networks
Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014. 6 [34] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In NIPS, 2015. 1, 2, 5 [35] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013. 5
1608.06993#50
1608.06993#52
1608.06993
[ "1605.07716" ]
1608.06993#52
Densely Connected Convolutional Networks
[36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. 2, 3 [37] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z.
1608.06993#51
1608.06993#53
1608.06993
[ "1605.07716" ]
1608.06993#53
Densely Connected Convolutional Networks
Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016. 2, 3, 4 [38] S. Targ, D. Almeida, and K. Lyman. Resnet in resnet: Generalizing residual architectures. arXiv preprint arXiv:1603.08029, 2016. 2 [39] J. Wang, Z. Wei, T. Zhang, and W. Zeng. Deeply-fused nets. arXiv preprint arXiv:1605.07716, 2016. 3 [40] B. M. Wilamowski and H. Yu.
1608.06993#52
1608.06993#54
1608.06993
[ "1605.07716" ]
1608.06993#54
Densely Connected Convolutional Networks
Neural network learning without backpropagation. IEEE Transactions on Neural Networks, 21(11):1793–1803, 2010. 2 [41] S. Yang and D. Ramanan. Multi-scale recognition with dag-cnns. In ICCV, 2015. 2 [42] S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. 3, 5, 6 [43] Y. Zhang, K. Lee, and H. Lee.
1608.06993#53
1608.06993#55
1608.06993
[ "1605.07716" ]
1608.06993#55
Densely Connected Convolutional Networks
Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In ICML, 2016. 3
1608.06993#54
1608.06993
[ "1605.07716" ]
1608.04868#0
Towards Music Captioning: Generating Music Playlist Descriptions
# TOWARDS MUSIC CAPTIONING: GENERATING MUSIC PLAYLIST DESCRIPTIONS Keunwoo Choi, György Fazekas, Mark Sandler Centre for Digital Music Queen Mary University of London [email protected] Brian McFee, Kyunghyun Cho Center for Data Science New York University {first.last}@nyu.edu # ABSTRACT Descriptions are often provided along with recommendations to help users' discovery. Recommending automatically generated music playlists (e.g. personalised playlists) introduces the problem of generating descriptions. In this paper, we propose a method for generating music playlist descriptions, which we call music captioning. In the proposed method, audio content analysis and natural language processing are adopted to utilise the information of each track.
1608.04868#1
1608.04868
[ "1507.07998" ]
1608.04868#1
Towards Music Captioning: Generating Music Playlist Descriptions
Figure 1. A block diagram of an RNN unit (left) and a sequence-to-sequence module applied to English-Korean translation (right). # 1. INTRODUCTION Motivation: One of the crucial problems in music discovery is to deliver the summary of music without playing it. One common method is to add descriptions to a music item or playlist, e.g. "Getting emotional with the undisputed King of Pop" 1, "Just the right blend of chilled-out acoustic songs to work, relax, think, and dream to" 2. These examples show that they are more than simple descriptions and even add value to the curated playlist as a product. There have been attempts to automate the generation of these descriptions. In [8], Eck et al. proposed to use social tags to describe each music item. Fields proposed a similar idea for playlists using social tags and a topic model [9] based on Latent Dirichlet Allocation [1]. Besides text, Bogdanov introduced music avatars, whose outlook - hair style, clothes, and accessories - describes the recommended music [2].
1608.04868#0
1608.04868#2
1608.04868
[ "1507.07998" ]
1608.04868#2
Towards Music Captioning: Generating Music Playlist Descriptions
• Seq2seq: Sequence-to-sequence (seq2seq) learning indicates training a model whose input and output are sequences (Figure 1, right). Seq2seq models can be used for machine translation, where a phrase in one language is summarised by an encoder RNN, which is followed by a decoder RNN that generates a phrase in another language [4]. • Word2vec: Word embeddings are distributed vector representations of words that aim to preserve the semantic relationships among words. One successful example is the word2vec algorithm, which is usually trained with large corpora in an unsupervised manner [13].
1608.04868#1
1608.04868#3
1608.04868
[ "1507.07998" ]
1608.04868#3
Towards Music Captioning: Generating Music Playlist Descriptions
• ConvNets: Convolutional neural networks (ConvNets) have been extensively adopted in nearly every computer vision task and algorithm since the record-breaking performance of AlexNet [12]. ConvNets also show state-of-the-art results in many music information retrieval tasks including auto-tagging [5]. Background: • RNNs: RNNs are neural networks that have a unit with a recurrent connection, whose output is connected to the input of the unit (Figure 1, left). They currently show state-of-the-art performance in tasks that involve sequence modelling. Two types of RNN unit are widely used: the Long Short-Term Memory (LSTM) unit [10] and the Gated Recurrent Unit (GRU) [3]. # 2. PROBLEM DEFINITION The problem of music captioning can be defined as generating a description for a set of music items using their audio content and text data. When the set includes more than one item, it can also be called music playlist captioning.
1608.04868#2
1608.04868#4
1608.04868
[ "1507.07998" ]
1608.04868#4
Towards Music Captioning: Generating Music Playlist Descriptions
1 Michael Jackson: Love songs and ballads by Apple Music 2 Your Coffee Break by Spotify # 3. THE PROPOSED METHOD © Keunwoo Choi, György Fazekas, Mark Sandler, Brian McFee, Kyunghyun Cho. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: © Keunwoo Choi, György Fazekas, Mark Sandler, Brian McFee, Kyunghyun Cho. "Towards Music Captioning: Generating Music Playlist Descriptions", Extended abstracts for the Late-Breaking Demo Session of the 17th International Society for Music Information Retrieval Conference, 2016.
1608.04868#3
1608.04868#5
1608.04868
[ "1507.07998" ]
1608.04868#5
Towards Music Captioning: Generating Music Playlist Descriptions
Both of the approaches use a sequence-to-sequence model, as illustrated in Figure 2. In the sequence-to-sequence model, the encoder consists of a two-layer RNN with GRU and encodes the track features into a vector, i.e., the encoded vector summarises the information of the input. This vector is also called the context vector because it provides context Figure 2. The diagrams of the two proposed approaches, where coloured blocks indicate trainable modules. The first approach uses a pre-trained ConvNet (conv) and word2vec (w2v), and only the sequence-to-sequence model is trained. In the second approach, all blocks are trained - a ConvNet to summarise the audio content, an RNN to summarise the text data of each track. Additional labels (y) such as genres or tags can be provided to help the training. information to the decoder. The decoder consists of a two-layer RNN with GRU and decodes the context vector into a sequence of words or word embeddings. The models are written in Keras and uploaded online 3 [6]. # 3.1 Pre-training approach This approach takes advantage of a pre-trained word embedding model 4 and a pre-trained auto-tagger 5. Therefore, the number of parameters to learn is reduced while leveraging additional data to train the word embedding and the auto-tagger. Each data sample consists of a sequence of N track features as input and an output word sequence of length M, which is an album feature. Input/Output 6: An n-th track feature, t^n ∈ R^350, represents one track and is created by concatenating the audio feature, t_a ∈ R^50, and the word feature, t_w ∈ R^300, i.e. t = [t_a; t_w]. For computing t_a, a convolutional neural network that is trained to predict tags is used to output a 50-dimensional vector for each track [5]. t_w is computed by Σ_k w_k / K, where w_k refers to the embedding of the k-th word in the metadata 7. The word embeddings were trained by the word2vec algorithm on the Google News dataset [13].
1608.04868#4
1608.04868#6
1608.04868
[ "1507.07998" ]
1608.04868#6
Towards Music Captioning: Generating Music Playlist Descriptions
A playlist feature is a sequence of word embeddings of the playlist description, i.e. p = [w_m], m = 0, 1, ..., M-1. 3 http://github.com/keunwoochoi/ismir2016-ldb-audio-captioning-model-keras 4 https://radimrehurek.com/gensim/models/word2vec.html 5 https://github.com/keunwoochoi/music-auto_tagging-keras, [5]
1608.04868#5
1608.04868#7
1608.04868
[ "1507.07998" ]
1608.04868#7
Towards Music Captioning: Generating Music Playlist Descriptions
6 The dimensions can vary; we describe them in detail for better understanding. 7 Because these word embeddings are distributed representations in a semantic vector space, the average of the words can summarise a bag of words and was used as a baseline in sentence and paragraph representation [7]. # 3.2 Fully-training approach The model in this approach includes the training of a ConvNet for audio summarisation and an RNN for text summarisation of each track. The structure of the ConvNet can be similar to the pre-trained one. The RNN module is trained to summarise the text of each track and outputs a sentence vector. These networks can be provided with additional labels (notated as y in Figure 2) to help the training, e.g., genres or tags. In that case, the objective of the whole structure consists of two different tasks and therefore the training can be more regulated and stable. Since the audio and text summarisation modules are trainable, they can be more relevant to the captioning task. However, this flexibility requires more training data. # 4. EXPERIMENTS AND CONCLUSIONS
1608.04868#6
1608.04868#8
1608.04868
[ "1507.07998" ]
1608.04868#8
Towards Music Captioning: Generating Music Playlist Descriptions
We tested the pre-training approach with a private production music dataset. The dataset has 374 albums and 17,354 tracks with descriptions of tracks, albums, audio signal and metadata. The learning rate is controlled by ADAM [11] with an objective function of 1-cosine proximity. The model was trained to predict the album descriptions. The model currently overfits and fails to generate correct sentences. One example of a generated word sequence is dramatic motivating the intense epic action adventure soaring soaring soaring gloriously Roger Deakins cinematography Maryse Alberti. This is expected since there are only 374 output sequences in the dataset - if we use early stopping, the model underfits, otherwise it overfits. In the future, we plan to solve the current problem - lack of data.
1608.04868#7
1608.04868#9
1608.04868
[ "1507.07998" ]
1608.04868#9
Towards Music Captioning: Generating Music Playlist Descriptions
The sentence generation can be partly trained on (music) corpora. A word2vec model that is trained with music corpora can be used to reduce the embedding dimension [14]. The model can also be modified so that the audio feature is optional and it mainly relies on metadata. In that case, acquisition of training data becomes more feasible. # 5. ACKNOWLEDGEMENTS This work was part funded by the FAST IMPACt EPSRC Grant EP/L019981/1 and the European Commission H2020 research and innovation grant AudioCommons (688382). Mark Sandler acknowledges the support of the Royal Society as a recipient of a Wolfson Research Merit Award. Brian McFee is supported by the Moore Sloan Data Science Environment at NYU. Kyunghyun Cho thanks the support by Facebook, Google (Google Faculty Award 2016) and NVidia (GPU Center of Excellence 2015-2016). The work was done while Keunwoo Choi was visiting the Center for Data Science at New York University.
1608.04868#8
1608.04868#10
1608.04868
[ "1507.07998" ]
1608.04868#10
Towards Music Captioning: Generating Music Playlist Descriptions
# 6. REFERENCES [1] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003. [2] Dmitry Bogdanov, Martín Haro, Ferdinand Fuhrmann, Anna Xambó, Emilia Gómez, and Perfecto Herrera. Semantic audio content-based music recommendation and visualization based on user preference examples. Information Processing & Management, 49(1):13–33, 2013. [3] Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio.
1608.04868#9
1608.04868#11
1608.04868
[ "1507.07998" ]
1608.04868#11
Towards Music Captioning: Generating Music Playlist Descriptions
On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014. [4] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014. [5] Keunwoo Choi, George Fazekas, and Mark Sandler. Automatic tagging using deep convolutional neural networks. In International Society of Music Information Retrieval Conference. ISMIR, 2016. [6] François Chollet.
1608.04868#10
1608.04868#12
1608.04868
[ "1507.07998" ]
1608.04868#12
Towards Music Captioning: Generating Music Playlist Descriptions
Keras. GitHub repository: https://github.com/fchollet/keras, 2015. [7] Andrew M Dai, Christopher Olah, and Quoc V Le. Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998, 2015. [8] Douglas Eck, Paul Lamere, Thierry Bertin-Mahieux, and Stephen Green. Automatic generation of social tags for music recommendation. In Advances in Neural Information Processing Systems, pages 385–392, 2008. [9] Ben Fields, Christophe Rhodes, Mark d'Inverno, et al.
1608.04868#11
1608.04868#13
1608.04868
[ "1507.07998" ]
1608.04868#13
Towards Music Captioning: Generating Music Playlist Descriptions
Using song social tags and topic models to describe and compare playlists. In 1st Workshop On Music Recommendation And Discovery (WOMRAD), ACM RecSys, 2010, Barcelona, Spain, 2010. [10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. [11] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012. [13] T Mikolov and J Dean.
1608.04868#12
1608.04868#14
1608.04868
[ "1507.07998" ]
1608.04868#14
Towards Music Captioning: Generating Music Playlist Descriptions
Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 2013. [14] Sergio Oramas, Luis Espinosa-Anke, Shuo Zhang, Horacio Saggion, and Xavier Serra. Natural language processing for music information retrieval. In 17th International Society for Music Information Retrieval Conference (ISMIR 2016), 2016.
1608.04868#13
1608.04868
[ "1507.07998" ]
1608.04337#0
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
# Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure Min Wang Department of EECS University of Central Florida Orlando, FL 32816 [email protected] Baoyuan Liu Department of EECS University of Central Florida Orlando, FL 32816 [email protected] Hassan Foroosh Department of EECS University of Central Florida Orlando, FL 32816 [email protected] # Abstract
1608.04337#1
1608.04337
[ "1502.03167" ]
1608.04337#1
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
Deep convolutional neural networks achieve remarkable visual recognition performance, at the cost of high compu- tational complexity. In this paper, we have a new design of efï¬ cient convolutional layers based on three schemes. The 3D convolution operation in a convolutional layer can be considered as performing spatial convolution in each chan- nel and linear projection across channels simultaneously. By unravelling them and arranging the spatial convolu- tion sequentially, the proposed layer is composed of a sin- gle intra-channel convolution, of which the computation is negligible, and a linear channel projection. A topological subdivisioning is adopted to reduce the connection between the input channels and output channels. Additionally, we also introduce a spatial â bottleneckâ structure that utilizes a convolution-projection-deconvolution pipeline to take ad- vantage of the correlation between adjacent pixels in the input. Our experiments demonstrate that the proposed lay- ers remarkably outperform the standard convolutional lay- ers with regard to accuracy/complexity ratio. Our models achieve similar accuracy to VGG, ResNet-50, ResNet-101 while requiring 42, 4.5, 6.5 times less computation respec- tively. consuming building block of the CNN, the convolutional layer, is performed by convolving the 3D input data with a series of 3D kernels. The computational complexity is quadratic in both the kernel size and the number of chan- nels. To achieve state-of-the-art performance, the number of channels needs to be a few hundred, especially for the layers with smaller spatial input dimension, and the kernel size is generally no less than 3. Several attempts have been made to reduce the amount of computation and parameters in both convolutional lay- ers and fully connected layers. Low rank decomposi- tion has been extensively explored in various fashions [7][8][9][10][11] to obtain moderate efï¬ ciency improve- ment. Sparse decomposition based methods [12][13] achieve higher theoretical reduction of complexity, while the actual speedup is bounded by the efï¬ ciency of sparse multiplication implementations. Most of these decomposition-based methods start from a pre-trained model, and perform decomposition and ï¬ ne-tuning based on it, while trying to maintain similar accuracy. This essen- tially precludes the option of improving efï¬ ciency by de- signing and training new CNN models from scratch.
1608.04337#0
1608.04337#2
1608.04337
[ "1502.03167" ]
1608.04337#2
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
# 1. Introduction Deep convolutional neural networks (CNN) have made significant improvement on solving visual recognition problems since the famous work by Krizhevsky et al. in 2012 [1][2][3][4][5]. Thanks to their deep structure, vision-oriented layer designs, and efficient training schemes, recent CNN models from Google [4] and MSRA [5] obtain better than human-level accuracy on the ImageNet ILSVRC dataset [6]. The computational complexity of the state-of-the-art models for both training and inference is extremely high, requiring several GPUs or clusters of CPUs.
1608.04337#1
1608.04337#3
1608.04337
[ "1502.03167" ]
1608.04337#3
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
The most time- On the other hand, in recent state-of-the-art deep CNN models, several heuristics are adopted to alleviate the burden of heavy computation. In [2], the number of channels is reduced by a linear projection before the actual convolutional layer; in [5], the authors utilize a bottleneck structure, in which both the input and the output channels are reduced by linear projection; in [4], 1×n and n×1 asymmetric convolutions are adopted to achieve larger kernel sizes. While these strategies to some extent help to design moderately efficient and deep models in practice, they do not provide a comprehensive analysis of optimizing the efficiency of the convolutional layer. In this work, we propose several schemes to improve the efficiency of convolutional layers. In standard convolutional layers, the 3D convolution can be considered as performing intra-channel spatial convolution and linear channel projection simultaneously, leading to highly redundant
1608.04337#2
1608.04337#4
1608.04337
[ "1502.03167" ]
1608.04337#4
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
computation. These two operations are first unraveled to a set of 2D convolutions in each channel and a subsequent linear channel projection. Then, we make the further modification of performing the 2D convolutions sequentially rather than in parallel. In this way, we obtain a single intra-channel convolutional (SIC) layer that involves only one filter for each input channel and linear channel projection, thus achieving significantly reduced complexity. By stacking multiple SIC layers, we can train models that are several times more efficient with similar or higher accuracy than models based on standard convolutional layers. In a SIC layer, linear channel projection consumes the majority of the computation. To reduce its complexity, we propose a topological subdivisioning framework between the input channels and output channels as follows: the input channels and the output channels are first rearranged into an s-dimensional tensor, then each output channel is only connected to the input channels that are within its local neighborhood. Such a framework leads to a regular sparsity pattern of the convolutional kernels, which is shown to possess a better performance/cost ratio than the standard convolutional layer in our experiments. Furthermore, we design a spatial "bottleneck" structure to take advantage of the local correlation of adjacent pixels in the input. The spatial dimensions are first reduced by intra-channel convolution with stride, then recovered by deconvolution with the same stride after linear channel projection. Such a design reduces the complexity of linear channel projection without sacrificing the spatial resolution.
1608.04337#3
1608.04337#5
1608.04337
[ "1502.03167" ]
1608.04337#5
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
Each of these schemes will be ex- plained in detail in Section 2, evaluated against traditional CNN models, and analyzed in Section 3. # 2.1. Standard Convolutional Layer Consider the input data I in Rhà wà n, where h, w and n are the height, width and the number of channels of the input feature maps, and the convolutional kernel K in Rkà kà nà n, where k is size of the convolutional kernel and n is the number of output channels. The operation of a stan- dard convolutional layer O â
1608.04337#4
1608.04337#6
1608.04337
[ "1502.03167" ]
1608.04337#6
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
R^{h×w×n} = K ∗ I is given by Algorithm 1. The complexity of a convolutional layer measured by the number of multiplications is n^2 k^2 h w. (1) # 2. Method In this section, we first review the standard convolutional layer, then introduce the proposed schemes. For the purpose of easy understanding, the first two schemes are explained with mathematical equations and pseudo-code, as well as illustrated with graphical visualization in Figure 5. Since the complexity is quadratic with the kernel size, in most recent CNN models, the kernel size is limited to 3 × 3 to control the overall running time. # 2.2.
1608.04337#5
1608.04337#7
1608.04337
[ "1502.03167" ]
1608.04337#7
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
Single Intra-Channel Convolutional Layer We make the assumption that the number of output chan- nels is equal to the number of input channels, and the in- put is padded so that the spatial dimensions of output is the same as input. We also assume that the residual learning technique is applied to each convolutional layer, namely the input is directly added to the output since they have the same dimension. In standard convolutional layers, the output features are produced by convolving a group of 3D kernels with the in- put features along the spatial dimensions. Such a 3D con- volution operation can be considered as a combination of 2D spatial convolution inside each channel and linear pro- jection across channels. For each output channel, a spatial Algorithm 1: Standard Convolutional Layer Input:
1608.04337#6
1608.04337#8
1608.04337
[ "1502.03167" ]
1608.04337#8
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
I ∈ R^{h×w×n}; Parameter: K ∈ R^{k×k×n×n}; Intermediate Data: Î ∈ R^{(h+k-1)×(w+k-1)×n}; Output: O ∈ R^{h×w×n}. Î = zero-padding(I, (k-1)/2); for y = 1 to h, x = 1 to w, j = 1 to n do: O(y, x, j) = Σ_{i=1}^{n} Σ_{u=1}^{k} Σ_{v=1}^{k} K(u, v, i, j) Î(y+u-1, x+v-1, i); end. convolution is performed on each input channel. The spatial convolution is able to capture local structural information, while the linear projection transforms the feature space for learning the necessary non-linearity in the neuron layers. When the number of input and output channels is large, typically hundreds, such a 3D convolutional layer requires an exorbitant amount of computation. A natural idea is that the 2D spatial convolution and linear channel projection can be unraveled and performed separately. Each input channel is first convolved with b 2D filters, generating intermediate features that have b times the channels of the input. Then the output is generated by linear channel projection. Unravelling these two operations provides us more freedom of model design by tuning both b and k. The complexity of such a layer is b(n k^2 + n^2) h w. (2) Typically, k is much smaller than n. The complexity is approximately linear with b. When b = k^2, this is equivalent to a linear decomposition of the standard convolutional layers [12]. When b < k^2, the complexity is lower than the standard convolutional layer in a low-rank fashion. Our key observation is that instead of convolving b 2D filters with each input channel simultaneously, we can perform the convolutions sequentially. The above convolutional layer with b filters can be transformed to a framework that has b layers. In each layer, each input channel is first convolved with a single 2D filter, then a linear projection is applied to all the input channels to generate the output channels. In this way, the number of channels is maintained the same throughout all b layers. Algorithm 2 formally describes this framework.
1608.04337#7
1608.04337#9
1608.04337
[ "1502.03167" ]
1608.04337#9
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
When we consider each of the b layers, only one k à k kernel is convolved with each input channel. This seems to be a risky choice. Convolving with only one ï¬ lter will not be able to preserve all the information from the input data, and there is very little freedom to learn all the useful local structures. Actually, this will probably lead to a low pass ï¬ lter, which is somewhat equivalent to the ï¬ rst principal component of the image. However, the existence of resid- ual learning module helps to overcome this disadvantage. With residual learning, the input is added to the output. The subsequent layers thus receive information from both the initial input and the output of preceding layers. Figure. 5 presents a visual comparison between the proposed method and standard convolutional layer. Algorithm 2: Single Intra-Channel Convolutional Layer Input:
1608.04337#8
1608.04337#10
1608.04337
[ "1502.03167" ]
1608.04337#10
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
I ∈ R^{h×w×n}; Parameter: K ∈ R^{k×k×n}, P ∈ R^{n×n}; Intermediate Data: Î ∈ R^{(h+k-1)×(w+k-1)×n}, G ∈ R^{h×w×n}; Output: O ∈ R^{h×w×n}. O = I // initialize output as input; Î = zero-padding(I, (k-1)/2); for i = 1 to b do // repeat this layer b times: for y = 1 to h, x = 1 to w, j = 1 to n do: G(y, x, j) = Σ_{u=1}^{k} Σ_{v=1}^{k} K(u, v, j) Î(y+u-1, x+v-1, j); end; for y = 1 to h, x = 1 to w, l = 1 to n do: O(y, x, l) = O(y, x, l) + Σ_{j=1}^{n} G(y, x, j) P(j, l); end; O = max(O, 0) // ReLU; Î = zero-padding(O, (k-1)/2); end. # 2.3. Topological Subdivisioning Given that the standard convolutional layer boils down to single intra-channel convolution and linear projection in the SIC layer, we make a further attempt to reduce the complexity of the linear projection. In [12], the authors proved that extremely high sparsity could be accomplished without sacrificing accuracy. While that sparsity was obtained by fine-tuning and did not possess any structure, we study how to build the sparsity with more regularity. Inspired by the topological ICA framework in [14], we propose an s-dimensional topological subdivisioning between the input and output channels in the convolutional layers. Assuming the number of input channels and output channels are both n, we first arrange the input and output channels as an s-dimensional tensor [d1, d2, ..., ds], with n = Π_{i=1}^{s} d_i. (3) Each output channel is only connected to its local neighbors in the tensor space rather than all input channels. The size of
1608.04337#9
1608.04337#11
1608.04337
[ "1502.03167" ]
1608.04337#11
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
The size of Intra-channel Projet ecw (a) 2D Topology Figure 3. Illustration of Spatial â Bottleneckâ Framework In this section, we introduce a spatial â bottleneckâ struc- ture that reduces the amount of computation without de- creasing either the spatial resolution or the number of chan- nels by exploiting the spatial redundancy of the input. Consider the 3D input data I in Rhà wà n, we ï¬
1608.04337#10
1608.04337#12
1608.04337
[ "1502.03167" ]
1608.04337#12
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
rst apply a single intra-channel convolution to each input channel as was introduced in Section 2.2. A k à k kernel is convolved with each input channel with stride k, so that the output k à w dimension is reduced to R h k à n. Then a linear projection layer is applied. Finally, We perform a k à k intra-channel deconvolution with stride k to recover the spatial resolution. Figure. 3 illustrates the proposed spatial â bottleneckâ (b) 3D Topology Figure 2. 2D &3D topology for input and output.
1608.04337#11
1608.04337#13
1608.04337
[ "1502.03167" ]
1608.04337#13
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
the local neighborhood is deï¬ ned by another s-dimensional tensor, [c1, c2, ..., cs], and the total number of neighbors for each output channel is s Il G=c (4) i=1 Algorithm 3: Convolutional Layer with Topological Subdivisioning Input: I ¢ Râ *<exr Parameter: []_, dj = n;c¢; < di, Vi = 1...s; K â ¬ The complexity of the proposed topologically subdivi- sioned convolutional layers compared to the standard con- volutional layers can be simply measured by c n . Figure. 2 illustrate the 2D and 3D topological subdivisioning be- tween the input channels and the output channels. A formal description of this layer is presented in Algorithm 3. i=1 di = n; ci â ¤ di, â i = 1...s; K â Rkà kà d1à ..à dsà c1à ...à cs Intermediate Data: Ë I â R(h+kâ 1)à (w+kâ 1)à n,Ë I â R(h+kâ 1)à (w+kâ 1)à d1à ...à ds Output: O â Rhà wà d1à ...à ds Ë I = zero-padding(I, kâ 1 2 ) Rearrange Ë I to Ë I for y = 1 to h, x = 1 to w, j1 = 1to d1, ... js = 1to ds do // Topological Subdivisioning When k = 1, the algorithm is suitable for the linear pro- jection layer, and can be directly embedded into Algorithm 2 to further reduce the complexity of the SIC layer. # 2.4. Spatial â Bottleneckâ Structure
1608.04337#12
1608.04337#14
1608.04337
[ "1502.03167" ]
1608.04337#14
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
In the design of traditional CNN models, there has al- ways been a trade-off between the spatial dimensions and the number of channels. While high spatial resolution is necessary to preserve detailed local information, large num- ber of channels produce high dimensional feature spaces and learn more complex representations.The complexity of one convolutional layer is determined by the product of these two factors. To maintain an acceptable complexity, the spatial dimensions are reduced by max pooling or stride convolution while the number of channels are increased. Oly, &, jis 5 i) = ->>. Dy S K(u, 0, jays Js5 tty +s ts): h=1 i,=lujv=1 I(ytu-latv-l, (j1 + 1 â 2)%di +1, (is + is â 2)%ds +1)
1608.04337#13
1608.04337#15
1608.04337
[ "1502.03167" ]
1608.04337#15
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
On the other hand, the adjacent pixels in the input of each convolutional layers are correlated, in a similar fash- ion to the image domain, especially when the spatial res- olution is high. While reducing the resolution by simple sub-sampling will obviously lead to a loss of information, such correlation presents considerable redundancy that can be taken advantage of. # end Stage Output 1082 1 2 362 A B C (7, 64)2 3 Ã 3 max pooling , stride 3 (1, 128) D E (3, 128) Ã 2 [3, 4, 128] Ã 2 < 3, 128 > Ã 4 < 5, 128 > Ã 4 < 3, 128 > Ã 6 3 182 2 Ã 2 max pooling , stride 2 (1, 256) (3, 256) Ã 2 [3, 4, 256] Ã 2 < 3, 256 > Ã 4 < 5, 256 > Ã 4 < 3, 256 > Ã 6 4 62 3 Ã 3 max pooling , stride 3 (1, 512) (3, 512) Ã 2 [3, 4, 512] Ã 2 < 3, 512 > Ã 4 < 5, 512 > Ã 4 < 3, 512 > Ã 6 12 (1, 1024) 6 Ã 6 average pooling, stride 6 fully connected, 2048 fully connected, 1000 softmax
1608.04337#14
1608.04337#16
1608.04337
[ "1502.03167" ]
1608.04337#16
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
Table 1. Configurations of baseline models and models with proposed SIC layers. For each convolutional layer, we use numbers in brackets to represent its configuration. k denotes the kernel size. n is the number of output channels. Different types of bracket correspond to different convolutional layers. (k, n) is a typical standard convolutional layer. [k, b, n] denotes an unraveled convolutional layer with b filters for each input channel. < k, n > represents our SIC layer. The number after the brackets indicates the number of times that the layer is repeated in each stage.
1608.04337#15
1608.04337#17
1608.04337
[ "1502.03167" ]
1608.04337#17
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
framework. The spatial resolution of the data is first reduced, then expanded, forming a bottleneck structure. In this 3-phase structure, the linear projection phase, which consumes most of the computation, is k^2 times more efficient than plain linear projection on the original input. The intra-channel convolution and deconvolution phases learn to capture the local correlation of adjacent pixels, while maintaining the spatial resolution of the output. Stage 2: intra-channel convolution 6.6%, linear projection 93.4%. Stage 3: intra-channel convolution 3.4%, linear projection 96.6%. Stage 4: intra-channel convolution 1.7%, linear projection 98.3%.
1608.04337#16
1608.04337#18
1608.04337
[ "1502.03167" ]
1608.04337#18
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
Table 2. Distribution of the computation in the SIC layer of Model C. The intra-channel convolution generally consumes less than 10% of total computation, and its proportion decreases when the number of channels increases. # 3. Experiments We evaluate the performance of our method on the Im- ageNet LSVRC 2012 dataset, which contains 1000 cate- gories, with 1.2M training images, 50K validation images, and 100K test images. We use Torch to train the CNN mod- els in our framework. Our method is implemented with CUDA and Lua based on the Torch platform.
1608.04337#17
1608.04337#19
1608.04337
[ "1502.03167" ]
1608.04337#19
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
The images are ï¬ rst resized to 256 à 256, then randomly cropped into 221 à 221 and ï¬ ipped horizontally while training. Batch normalization [3] is placed after each convolutional layer and before the ReLU layer. We also adopt the dropout [15] strategy with a ratio of 0.2 during training. Standard stochastic gradient descent with mini-batch containing 256 images is used to train the model. We start the learning rate from 0.1 and divide it by a factor of 10 every 30 epochs. Each model is trained for 100 epochs. For batch normal- ization, we use exponential moving average to calculate the batch statistics as is implemented in CuDNN [16]. The code is run on a server with 4 Pascal Titan X GPU. For all the models evaluated below, the top-1 and top-5 error of valida- tion set with central cropping is reported. We evaluate the performance and efï¬ ciency of a series of models designed using the proposed efï¬ cient convolutional layer. To make cross reference easier and help the readers keep track of all the models, each model is indexed with a capital letter. We compare our method with a baseline CNN model that is built from standard convolutional layers. The details of the baseline models are given in Table 1. The convolutional layers are divided into stages according to their spatial di- mensions. Inside each stage, the convolutional kernels are performed with paddings so that the output has the same spatial dimensions as the input. Across the stages, the spa- tial dimensions are reduced by max pooling and the num- ber of channels are doubled by 1 à 1 convolutional layer. One fully connected layer with dropout is added before the logistic regression layer for ï¬
1608.04337#18
1608.04337#20
1608.04337
[ "1502.03167" ]
1608.04337#20
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
nal classification. Residual learning is added after every convolutional layer with the same number of input and output channels. We evaluate the performance of our method by substituting the standard convolutional layers in the baseline models with the proposed Single Intra-Channel Convolutional (SIC) layers. We leave the 7 × 7 convolutional layer in the first stage and the 1 × 1 convolutional layers across stages the same, and only substitute the 3 × 3 convolutional layers. Model A: kernel size 3, 2 layers per stage, top-1 30.67%, top-5 11.24%, complexity 1. Model B: kernel size 3, 2 layers, top-1 30.69%, top-5 11.27%, complexity ~4/9. Model C: kernel size 3, 4 layers, top-1 29.78%, top-5 10.78%, complexity ~2/9. Model D: kernel size 5, 4 layers, top-1 29.23%, top-5 10.48%, complexity ~2/9. Model E: kernel size 3, 6 layers, top-1 28.83%, top-5 9.88%, complexity ~1/3. Table 3.
1608.04337#19
1608.04337#21
1608.04337
[ "1502.03167" ]
1608.04337#21
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
Top-1 & Top-5 error and complexity per stage of model A to E. The models with proposed design (model C, D, E)demonstrate signiï¬ cantly better accuracy / complexity ratio than the baseline model. In the following sections, the relative complexities are also measured with regards to these layers. convolutional layers in the baseline model, so the overall complexity per stage is reduced by a factor of 2. # 3.1. Single Intra-Channel Convolutional Layer We ï¬ rst substitute the standard convolutional layer with the unraveled convolution conï¬ guration in model B. Each input channel is convolved with 4 ï¬ lters, so that the com- plexity of B is approximately 4 9 of the baseline model A. In model C , we use two SIC layers to replace one standard convolutional layer. Even though our model C has more layers than the baseline model A, its complexity is only 2 9 of model A. In model E, we increase the number of SIC layers from 4 in model C to 6 in model E. The complexity of model E is only 1 3 of the baseline. Due to the extremely low complexity of the SIC layer, we can easily increase the model depth without too much increase of the computation. Table. 2 lists the distribution of computation between the intra-channel convolution and linear channel projection of each SIC layer in model C. The intra-channel convolution generally consumes less than 10% of the total layer com- putation. Thanks to this advantage, we can utilize a larger kernel size with only a small sacriï¬
1608.04337#20
1608.04337#22
1608.04337
[ "1502.03167" ]
1608.04337#22
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
ce of efï¬ ciency. Model D is obtained by setting the kernel size of model C to 5. Table 3 lists the top-1 and top-5 errors and the complex- ity of models from A to E. Comparing model B and A, with same number of layers, model B can match the accuracy of model A with less than half computation. When comparing the SIC based model C with model B, model C reduces the top-1 error by 1% with half complexity.
1608.04337#21
1608.04337#23
1608.04337
[ "1502.03167" ]
1608.04337#23
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
This veriï¬ es the superior efï¬ ciency of the proposed SIC layer. With 5 à 5 kernels, model E obtains 0.5% accuracy gain with as low as 5% increase of complexity on average. This demonstrates that increasing kernel size in SIC layer provides us another choice of improving the accuracy/complexity ratio. # 3.2. Topological Subdivisioning We ï¬ rst compare the performance of two different topo- logical conï¬ gurations against the baseline model. Model F adopts 2D topology and ci = di/2 for both dimensions, which leads to a reduction of complexity by a factor of 4. In Model G, we use 3D topology and set ci and di, so that the complexity is reduced by a factor of 4.27. The details of the network conï¬ guration are listed in Table 4. The num- ber of topological layers is twice the number of standard Stage 2 3 4 #Channels 128 256 512 2D topology d1 à d2 c1 à c2 8 à 16 4 à 8 16 à 16 8 à 8 16 à 32 8 à 16 3D topology d1 à d2 à d3 c1 à c2 à c3 4 à 8 à 4 2 à 5 à 3 8 à 8 à 4 4 à 5 à 3 8 à 8 à 8 4 à 5 à 6 Table 4. Conï¬ gurations of model F and G that use 2D and 3D topological subdivisioning. di and ci stand for the tensor and neighbor dimensions in Algorithm 3. They are designed so that the complexity is reduced by (approximately for 3D) a factor of 4. As a comparison, we also train a model H using the straightforward grouping strategy introduced in [1]. Both the input and output channels are divided into 4 groups. The output channels in each group are only dependent on the in- put channels in the corresponding group. The complexity is also reduced 4 times in this manner. Table 5 lists the top-1 & top-5 error rate and complexities of model F to H. Both the 2D and the 3D topology models outperform the grouping method with lower error rate while maintaining the same complexity.
1608.04337#22
1608.04337#24
1608.04337
[ "1502.03167" ]
1608.04337#24
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
When compared with the baseline model, both of the two topology models achieve similar top-1 and top-5 error rate with half the computation. Finally, we apply the topological subdivisioning to the SIC layer in model I. We choose 2D topology based on the In model I, there are 8 convolutional results in Table 5. layers for each stage, due to the layer doubling caused by both the SIC layer and the topological subdivisioning. The complexity of each layer is, however, approximately as low as 1 36 of a standard 3 à 3 convolutional layer. Compared to the baseline model, 2D topology together with SIC layer achieves similar error rate while being 9 times faster. # 3.3. Spatial â Bottleneckâ Structure In our evaluation of layers with spatial â bottleneckâ structure, both the kernel size and the stride of the in- channel convolution and deconvolution is set to 2. The com- plexity of such a conï¬ guration is a quarter of a SIC layer. Model Methods Baseline Grouping 2D Top 3D Top SIC+2D A H F G I Top-5 Top-1 30.67% 11.24% 31.23% 11.73% 30.53% 11.28% 30.69% 11.38% 30.78% 11.29% Complexity 1 Ë 1/2 Ë 1/2 Ë 15/32 Ë 1/9 Table 5. Top-1&Top-5 error rate and complexity of topology mod- els and grouping model. Both model J and model K are modiï¬ ed from model C by replacing SIC layers with spatial â bottleneckâ layers. One SIC layer is substituted with two Spatial â Bottleneckâ lay- ers, the ï¬
1608.04337#23
1608.04337#25
1608.04337
[ "1502.03167" ]
1608.04337#25
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
rst one with no padding and the second one with one pixel padding, leading to a 50% complexity reduction. In model J, every other SIC layer is substituted; in model K, all SIC layers are substituted. Table 6 compares their performance with the baseline model and the SIC-based model. Compared to the SIC model C, model J reduces the complexity by 25% with no loss of accuracy; model K reduces the complexity by 50% with a slight drop of accuracy. Compared to the baseline model A, model K achieves 9 times speedup with similar accuracy. Model A: 2 layers, top-1 30.67%, top-5 11.24%, complexity 1. Model C: 4 layers, top-1 29.78%, top-5 10.78%, complexity ~2/9. Model J: 6 layers, top-1 29.72%, top-5 10.66%, complexity ~1/6. Model K: 8 layers, top-1 30.78%, top-5 11.34%, complexity ~1/9. Table 6.
1608.04337#24
1608.04337#26
1608.04337
[ "1502.03167" ]
1608.04337#26
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
Top-1&Top-5 error rate and complexity of SIC layer with spatial "bottleneck" structure. # 3.4. Comparison with standard CNN models In this section, we increase the depth of our models to compare with recent state-of-the-art CNN models. To go deeper but without increasing too much complexity, we adopt a channel-wise bottleneck structure similar to the one introduced in [5]. In each channel-wise bottleneck structure, the number of channels is first reduced by half by the first layer, then recovered by the second layer. Such a two-layer bottleneck structure has almost the same complexity as a single layer with the same input and output channels, and thus increases the overall depth of the network. We gradually increase the number of SIC layers with channel-wise bottleneck structure in each stage from 8 to 40, and compare their complexity to recent CNN models with similar accuracies. Models L, M, N and O correspond to 8, 12, 24, and 40 layers, respectively. Due to training memory limitation, only the SIC layer is used in the models in this section. While models L and M have the same spatial dimensions and stage structures as in Table 1, models N and O adopt the same structure as in [5]. They have different pooling strides and one more stage right after the first 7 × 7 convolutional layer. The detailed model Figure 4. Comparing top-1 accuracy and complexity between our model and several previous works (top-1 accuracy versus number of multiplications, for AlexNet, GoogLeNet, ResNet-18/34/50/101 and our models L, M, N, O).
1608.04337#25
1608.04337#27
1608.04337
[ "1502.03167" ]
1608.04337#27
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
conï¬ gurations are put in the supplemental materials. Figure 4 compares the accuracy and complexity of our model from L to O with several previous works. Table 7 lists the detailed results. Figure 4 provides a visual compar- ison in the form of scattered plot. The red marks in the ï¬ g- ure represent our models. All of our models demonstrate re- markably lower complexity while being as accurate. Com- pared to VGG, Resnet-34, Resnet-50 and Resnet-101 mod- els, our models are 42Ã
1608.04337#26
1608.04337#28
1608.04337
[ "1502.03167" ]
1608.04337#28
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
, 7.3à , 4.5à , 6.5à more efï¬ cient respectively with similar or lower top-1 or top-5 error. # 3.5. Visualization of ï¬ lters Given the exceptionally good performance of the pro- posed methods, one might wonder what type of kernels are actually learned and how they compare with the ones in traditional convolutional layers. We randomly chose some kernels in the single intra-channel convolutional layers and the traditional convolutional layers, and visualize them side by side in Figure 5 to make an intuitive comparison.
1608.04337#27
1608.04337#29
1608.04337
[ "1502.03167" ]
1608.04337#29
Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure
Both 3 à 3 kernels and 5 à 5 kernels are shown in the ï¬ gure. The kernels learned by the proposed method demonstrate much higher level of regularized structure, while the kernels in standard convolutional layers exhibit more randomness. We attribute this to the stronger regularization caused by the reduction of number of ï¬ lters. # 3.6. Discussion on implementation details In both SIC layer and spatial â bottleneckâ structure , most of the computation is consumed by the linear channel projection, which is basically a matrix multiplication. The 2D spatial convolution in each channel has similar complex- ity to a max pooling layer. Memory access takes most run- ning time due to low amount of computation.
The efficiency of our CUDA-based implementation is similar to that of open-source libraries like Caffe and Torch. We believe higher efficiency can easily be achieved with an expert-level GPU implementation like in cuDNN.

| Model       | Top-1 err. | Top-5 err. | # of Multiplications |
|-------------|------------|------------|----------------------|
| AlexNet     | 42.5%      | 18.2%      | 725M                 |
| GoogleNet   | 31.5%      | 10.07%     | 1600M                |
| ResNet 18   | 30.43%     | 10.76%     | 1800M                |
| VGG         | 28.5%      | 9.9%       | 16000M               |
| Our Model L | 28.29%     | 9.9%       | 381M                 |
| ResNet 34   | 26.73%     | 8.74%      | 3600M                |
| Our Model M | 27.07%     | 8.93%      | 490M                 |
| ResNet 50   | 24.7%      | 7.8%       | 3800M                |
| Our Model N | 24.76%     | 7.58%      | 845M                 |
| ResNet 101  | 23.6%      | 7.1%       | 7600M                |
| Our Model O | 23.99%     | 7.12%      | 1172M                |
Table 7. Top-1 and top-5 error rates of single-crop testing with a single model, and the number of multiplications of our models and several previous works. The numbers in this table are generated with a single model and center crop. For AlexNet and GoogLeNet, the top-1 error is missing in the original papers and we use the numbers from Caffe's implementation [17]. For ResNet-34, we use the number from Facebook's implementation [18].

[Figure 5: grids of learned kernels shown in four panels: (a) 3 × 3 standard convolutional layer, (b) 3 × 3 single intra-channel convolutional layer, (c) 5 × 5 standard convolutional layer, (d) 5 × 5 single intra-channel convolutional layer.]

Figure 5. Visualization of convolutional kernels. We compare the 3 × 3 and 5 × 5 kernels that are learned by the proposed single intra-channel convolutional layer and the standard convolutional layer.
The kernels from the single intra-channel convolution exhibit a higher level of regularity in structure.

The topological subdivisioning layer resembles the structure of 2D and 3D convolution. Unlike sparsity-based methods, the regular connection pattern from topological subdivisioning makes an efficient implementation possible. Currently, our implementation simply discards all the non-connected weights in a convolutional layer.
# 4. Conclusion

This work introduces a novel design of an efficient convolutional layer in deep CNNs that involves three specific improvements: (i) a single intra-channel convolutional (SIC) layer; (ii) a topological subdivisioning scheme; and (iii) a spatial "bottleneck" structure. As we demonstrated, they are all powerful schemes that, in different ways, yield a new design of the convolutional layer with higher efficiency, while achieving equal or better accuracy compared to classical designs. While the numbers of input and output channels remain the same as in the classical models, both the convolutions and the number of connections can be optimized against accuracy in our model: (i) reduces complexity by unraveling the convolution, (ii) uses topology to make connections in the convolutional layer sparse while maintaining local regularity, and (iii) uses a conv-deconv bottleneck to reduce convolution while maintaining resolution. Although CNNs have been exceptionally successful in terms of recognition accuracy, it is still not clear which architecture is optimal and learns visual information most effectively. The methods presented herein attempt to answer this question by focusing on improving the efficiency of the convolutional layer. We believe this work will inspire more comprehensive studies in the direction of optimizing convolutional layers in deep CNNs.
# References

[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012. 1, 6

[2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. 1

[3] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 1, 5
[4] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. 1

[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. 1, 7

[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009. 1

[7] Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient
evaluation. In Advances in Neural Information Processing Systems, 2014. 1

[8] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proc. BMVC, 2014. 1

[9] Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1984–
1992, 2015. 1

[10] Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with low-rank filters for efficient image classification. arXiv preprint arXiv:1511.06744, 2015. 1

[11] Cheng Tai, Tong Xiao, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015. 1

[12] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 806–814, 2015. 1, 3

[13] Song Han, Huizi Mao, and William J. Dally.
Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2, 2015. 1

[14] Aapo Hyvärinen, Patrik Hoyer, and Mika Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001. 3

[15] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from over-
fitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014. 5

[16] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014. 5

[17] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell.
Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675–678. ACM, 2014. 8

[18] Sam Gross and Michael Wilber. ResNet training in Torch. https://github.com/charlespwd/project-title, 2016. 8
Published as a conference paper at ICLR 2017

# SGDR: STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS

Ilya Loshchilov & Frank Hutter
University of Freiburg
Freiburg, Germany
{ilya,fh}@cs.uni-freiburg.de

# ABSTRACT
Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR

# 1 INTRODUCTION
Deep neural networks (DNNs) are currently the best-performing method for many classification problems, such as object recognition from images (Krizhevsky et al., 2012a; Donahue et al., 2014) or speech recognition from audio data (Deng et al., 2013). Their training on large datasets (where DNNs perform particularly well) is the main computational bottleneck: it often requires several days, even on high-performance GPUs, and any speedups would be of substantial value. The training of a DNN with n free parameters can be formulated as the problem of minimizing a function f :
IR^n → IR. The commonly used procedure to optimize f is to iteratively adjust xt ∈ IR^n (the parameter vector at time step t) using gradient information ∇ft(xt) obtained on a relatively small t-th batch of b datapoints. The Stochastic Gradient Descent (SGD) procedure then becomes an extension of Gradient Descent (GD) to stochastic optimization of f as follows:

xt+1 = xt − ηt ∇ft(xt),   (1)

where ηt is a learning rate. One would like to consider second-order information,

xt+1 = xt − ηt Ht^-1 ∇ft(xt),   (2)

but this is often infeasible since the computation and storage of the inverse Hessian Ht^-1 is intractable for large n. The usual way to deal with this problem by using limited-memory quasi-Newton methods such as L-BFGS (Liu & Nocedal, 1989) is not currently in favor in deep learning, not the least due to (i) the stochasticity of ∇ft(xt), (ii) ill-conditioning of f and (iii) the presence of saddle points as a result of the hierarchical geometric structure of the parameter space (Fukumizu & Amari, 2000). Despite some recent progress in understanding and addressing the latter problems (Bordes et al., 2009; Dauphin et al., 2014; Choromanska et al., 2014; Dauphin et al., 2015), state-of-the-art optimization techniques attempt to approximate the inverse Hessian in a reduced way, e.g., by considering only its diagonal to achieve adaptive learning rates. AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014) are notable examples of such methods.
[Figure 1: learning rate ηt (log scale) over the first 200 epochs for the default schedules with η0 = 0.1 and η0 = 0.05, and for warm-restart schedules with T0 = 50, T0 = 100 and T0 = 200 (Tmult = 1), and with T0 = 1 and T0 = 10 (Tmult = 2).]

Figure 1: Alternative schedule schemes of learning rate ηt over batch index t: default schemes with η0 = 0.1 (blue line) and η0 = 0.05 (red line) as used by Zagoruyko & Komodakis (2016); warm restarts simulated every T0 = 50 (green line), T0 = 100 (black line) and T0 = 200 (grey line) epochs with ηt decaying during the i-th run to ηi min = 0 according to eq. (5); warm restarts starting from epoch T0 = 1 (dark green line) and T0 = 10 (magenta line) with doubling (Tmult = 2) periods Ti at every new warm restart.

Intriguingly enough, the current state-of-the-art results on CIFAR-10, CIFAR-100, SVHN, ImageNet, PASCAL VOC and MS COCO datasets were obtained by Residual Neural Networks (He et al., 2015; Huang et al., 2016c; He et al., 2016; Zagoruyko & Komodakis, 2016) trained without the use of advanced methods such as AdaDelta and Adam. Instead, they simply use SGD with momentum¹:

vt+1 = µt vt − ηt ∇ft(xt),   (3)
xt+1 = xt + vt+1,   (4)
where vt is a velocity vector initially set to 0, ηt is a decreasing learning rate and µt is a momentum rate which defines the trade-off between the current and past observations of ∇ft(xt). The main difficulty in training a DNN is then associated with the scheduling of the learning rate and the amount of L2 weight decay regularization employed. A common learning rate schedule is to use a constant learning rate and divide it by a fixed constant in (approximately) regular intervals. The blue line in Figure 1 shows an example of such a schedule, as used by Zagoruyko & Komodakis (2016) to obtain the state-of-the-art results on CIFAR-10, CIFAR-100 and SVHN datasets. In this paper, we propose to periodically simulate warm restarts of SGD, where in each restart the learning rate is initialized to some value and is scheduled to decrease. Four different instantiations of this new learning rate schedule are visualized in Figure 1. Our empirical results suggest that SGD with warm restarts requires 2× to 4× fewer epochs than the currently-used learning rate schedule schemes to achieve comparable or even better results. Furthermore, combining the networks obtained right before restarts in an ensemble following the approach proposed by Huang et al. (2016a) improves our results further to 3.14% for CIFAR-10 and 16.21% for CIFAR-100. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset.
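For readers who prefer code to equations, the updates (1) and (3)-(4) amount to the few lines below. This is a minimal NumPy sketch of the update rule itself, not the training code used in the paper.

```python
import numpy as np

def sgd_momentum_step(x, v, grad, lr, momentum=0.9):
    """Eq. (3)-(4): v <- mu*v - lr*grad, then x <- x + v.
    With momentum=0 this reduces to plain SGD, eq. (1)."""
    v = momentum * v - lr * grad
    x = x + v
    return x, v

# Toy run on f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x, v = np.ones(3), np.zeros(3)
for _ in range(200):
    x, v = sgd_momentum_step(x, v, grad=x, lr=0.1)
print(np.linalg.norm(x))  # approaches 0
```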
¹ More specifically, they employ Nesterov's momentum (Nesterov, 1983; 2013).

# 2 RELATED WORK

2.1 RESTARTS IN GRADIENT-FREE OPTIMIZATION

When optimizing multimodal functions one may want to find all global and local optima. The tractability of this task depends on the landscape of the function at hand and the budget of function evaluations. Gradient-free optimization approaches based on niching methods (Preuss, 2015) usually can deal with this task by covering the search space with dynamically allocated niches of local optimizers. However, these methods usually work only for relatively small search spaces, e.g., n < 10, and do not scale up due to the curse of dimensionality (Preuss, 2010). Instead, the current state-of-the-art gradient-free optimizers employ various restart mechanisms (Hansen, 2009; Loshchilov et al., 2012). One way to deal with multimodal functions is to iteratively sample a large number λ of candidate solutions, make a step towards better solutions and slowly shape the sampling distribution to maximize the likelihood of successful steps appearing again (Hansen & Kern, 2004). The larger the λ, the more global the search, requiring more function evaluations. In order to achieve good anytime performance, it is common to start with a small λ and increase it (e.g., by doubling) after each restart. This approach works best on multimodal functions with a global funnel structure and also improves the results on ill-conditioned problems where numerical issues might lead to premature convergence when λ is small (Hansen, 2009).

2.2 RESTARTS IN GRADIENT-BASED OPTIMIZATION

Gradient-based optimization algorithms such as BFGS can also perform restarts to deal with multimodal functions (Ros, 2009). In large-scale settings, when the usual number of variables n is on the order of 10^3 to 10^9, the availability of gradient information provides a speedup of a factor of n w.r.t. gradient-free approaches. Warm restarts are usually employed to improve the convergence rate rather than to deal with multimodality: often it is
sufficient to approach any local optimum to a given precision, and in many cases the problem at hand is unimodal. Fletcher & Reeves (1964) proposed to flush the history of the conjugate gradient method every n or (n + 1) iterations. Powell (1977) proposed to check whether enough orthogonality between ∇f(xt−1) and ∇f(xt) has been lost to warrant another warm restart. Recently, O'Donoghue & Candes (2012) noted that the iterates of accelerated gradient schemes proposed by Nesterov (1983; 2013) exhibit a periodic behavior if momentum is overused. The period of the oscillations is proportional to the square root of the local condition number of the (smooth convex) objective function. The authors showed that fixed warm restarts of the algorithm with a period proportional to the condition number achieve the optimal linear convergence rate of the original accelerated gradient scheme. Since the condition number is an unknown parameter and its value may vary during the search, they proposed two adaptive warm restart techniques (O'Donoghue & Candes, 2012):

• The function scheme restarts whenever the objective function increases.
• The gradient scheme restarts whenever the angle between the momentum term and the negative gradient is obtuse, i.e., when the momentum seems to be taking us in a bad direction, as measured by the negative gradient at that point. This scheme resembles the one of Powell (1977) for the conjugate gradient method.
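A compact paraphrase of these two criteria is given below. It is an illustration in our own notation, assuming access to the current and previous objective values, the momentum (velocity) term and the gradient; it is not the reference implementation of O'Donoghue & Candes (2012).

```python
import numpy as np

def should_restart(f_new, f_old, velocity, grad, scheme="gradient"):
    """Adaptive warm restart criteria in the spirit of O'Donoghue & Candes (2012).

    - "function" scheme: restart when the objective function increases.
    - "gradient" scheme: restart when the angle between the momentum term and
      the negative gradient is obtuse, i.e. their inner product is negative."""
    if scheme == "function":
        return f_new > f_old
    return float(np.dot(np.asarray(velocity), -np.asarray(grad))) < 0.0
```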
O'Donoghue & Candes (2012) showed (and it was confirmed in a set of follow-up works) that these simple schemes provide an acceleration on smooth functions and can be adjusted to accelerate state-of-the-art methods such as FISTA on nonsmooth functions. Smith (2015; 2016) recently introduced cyclical learning rates for deep learning; his approach is closely related to ours in spirit and formulation but does not focus on restarts. Yang & Lin (2015) showed that Stochastic subGradient Descent with restarts can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems where the epigraph of the objective function is a polyhedron. In contrast to our work, they never increase the learning rate to perform restarts but decrease it geometrically at each epoch. To perform restarts, they periodically reset the current solution to the averaged solution from the previous epoch.
# 3 STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS (SGDR)

The existing restart techniques can also be used for stochastic gradient descent if the stochasticity is taken into account. Since gradients and loss values can vary widely from one batch of the data to another, one should denoise the incoming information: by considering averaged gradients and losses, e.g., once per epoch, the above-mentioned restart techniques can be used again.

In this work, we consider one of the simplest warm restart approaches. We simulate a new warm-started run / restart of SGD once Ti epochs are performed, where i is the index of the run. Importantly, the restarts are not performed from scratch but emulated by increasing the learning rate ηt while the old value of xt is used as an initial solution. The amount of this increase controls to which extent the previously acquired information (e.g., momentum) is used. Within the i-th run, we decay the learning rate with a cosine annealing for each batch as follows:

ηt = ηi min + (1/2)(ηi max − ηi min)(1 + cos(π Tcur / Ti)),   (5)

where ηi min and ηi max are ranges for the learning rate, and Tcur accounts for how many epochs have been performed since the last restart. Since Tcur is updated at each batch iteration t, it can take fractional values such as 0.1, 0.2, etc. Thus, ηt = ηi max when t = 0 and Tcur = 0. Once Tcur = Ti, the cos function outputs −1 and thus ηt = ηi min. The decrease of the learning rate is shown in Figure 1 for fixed Ti = 50, Ti = 100 and Ti = 200; note that the logarithmic axis obfuscates the typical shape of the cosine function. In order to improve anytime performance, we suggest an option to start with an initially small Ti and increase it by a factor of Tmult at every restart (see, e.g., Figure 1 for T0 = 1, Tmult = 2 and T0 = 10, Tmult = 2).
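A minimal sketch of the schedule defined by eq. (5), including the warm restarts and the optional lengthening of runs by Tmult, is given below. The per-epoch batch count and the ηi max value in the example are illustrative placeholders, not values prescribed by the paper.

```python
import math

def sgdr_lr(t_cur, t_i, eta_min, eta_max):
    """Eq. (5): cosine annealing from eta_max (at T_cur = 0) down to eta_min (at T_cur = T_i)."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * t_cur / t_i))

def sgdr_schedule(epochs, t_0, t_mult=1, eta_min=0.0, eta_max=0.05, batches_per_epoch=391):
    """Learning rate for every batch: T_cur grows by 1/batches_per_epoch per batch
    (hence the fractional values mentioned above) and is reset to 0 at each warm
    restart, at which point T_i is multiplied by t_mult."""
    lrs, t_cur, t_i = [], 0.0, float(t_0)
    for _ in range(epochs * batches_per_epoch):
        lrs.append(sgdr_lr(t_cur, t_i, eta_min, eta_max))
        t_cur += 1.0 / batches_per_epoch
        if t_cur >= t_i:              # warm restart
            t_cur, t_i = 0.0, t_i * t_mult
    return lrs

lrs = sgdr_schedule(epochs=70, t_0=10, t_mult=2)  # runs of 10, 20 and 40 epochs
```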
It might be of great interest to decrease ηi min at every new restart. However, for the sake of simplicity, here we keep ηi min the same for every i to reduce the number of hyperparameters involved. Since our simulated warm restarts (the increase of the learning rate) often temporarily worsen performance, we do not always use the last xt as our recommendation for the best solution (also called the incumbent solution). While our recommendation during the first run (before the first restart) is indeed the last xt, our recommendation after this is a solution obtained at the end of the last performed run at ηt = ηi min.
We emphasize that with the help of this strategy, our method does not require a separate validation data set to determine a recommendation.

# 4 EXPERIMENTAL RESULTS

4.1 EXPERIMENTAL SETTINGS

We consider the problem of training Wide Residual Neural Networks (WRNs; see Zagoruyko & Komodakis (2016) for details) on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009). We will use the abbreviation WRN-d-k to denote a WRN with depth d and width k. Zagoruyko & Komodakis (2016) obtained the best results with a WRN-28-10 architecture, i.e., a Residual Neural Network with d = 28 layers and k = 10 times more filters per layer than used in the original Residual Neural Networks (He et al., 2015; 2016).

The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 32 × 32 color images drawn from 10 and 100 classes, respectively, split into 50,000 train and 10,000 test images. For image preprocessing Zagoruyko & Komodakis (2016) performed global contrast normalization and ZCA whitening. For data augmentation they performed horizontal flips and random crops from the image padded by 4 pixels on each side, filling missing pixels with reflections of the original image.
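A sketch of this flip-and-crop augmentation using torchvision transforms is shown below; torchvision is our stand-in here, since the original experiments used the authors' own Torch pipeline.

```python
from torchvision import transforms

# Random 32x32 crops from an image padded by 4 pixels on each side, with missing
# pixels filled by reflection, plus random horizontal flips.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4, padding_mode="reflect"),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```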
For training, Zagoruyko & Komodakis (2016) used SGD with Nesterov's momentum with initial learning rate set to η0 = 0.1, weight decay to 0.0005, dampening to 0, momentum to 0.9 and minibatch size to 128. The learning rate is dropped by a factor of 0.2 at 60, 120 and 160 epochs, with a total budget of 200 epochs. We reproduce the results of Zagoruyko & Komodakis (2016) with the same settings except that i) we subtract per-pixel mean only and do not use ZCA whitening; ii) we use SGD with momentum as described by eq. (3-4) and not Nesterov's momentum.
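For comparison with the cosine schedule above, the baseline step schedule just described can be sketched as follows (a helper for illustration, not the authors' training script).

```python
def step_schedule(epoch, eta_0=0.1, drop=0.2, milestones=(60, 120, 160)):
    """Default schedule: start at eta_0 and multiply by `drop` at each milestone,
    over a total budget of 200 epochs."""
    lr = eta_0
    for m in milestones:
        if epoch >= m:
            lr *= drop
    return lr

assert abs(step_schedule(59) - 0.1) < 1e-12
assert abs(step_schedule(160) - 0.1 * 0.2 ** 3) < 1e-12
```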
[Figure 2: six panels of test error versus epochs, showing WRN-28-10 on CIFAR-10 and CIFAR-100 (top), zoomed-in views of the same (middle), and WRN-28-20 on CIFAR-10 and CIFAR-100 (bottom), for the default schedules (lr = 0.1, lr = 0.05) and for SGDR with T0 = 50, 100, 200 (Tmult = 1) and T0 = 1, 10 (Tmult = 2).]

Figure 2: Test errors on CIFAR-10 (left column) and CIFAR-100 (right column) datasets. Note that for SGDR we only plot the recommended solutions. The top and middle rows show the same results on WRN-28-10, with the middle row zooming into the good performance region of low test error. The bottom row shows performance with a wider network, WRN-28-20. The results of the default learning rate schedules of Zagoruyko & Komodakis (2016) with η0 = 0.1 and η0 = 0.05 are depicted by the blue and red lines, respectively.
The schedules of ηt used in SGDR are shown with i) restarts every T0 = 50 epochs (green line); ii) restarts every T0 = 100 epochs (black line); iii) restarts every T0 = 200 epochs (gray line); iv) restarts with doubling (Tmult = 2) periods of restarts starting from the first epoch (T0 = 1, dark green line); and v) restarts with doubling (Tmult = 2) periods of restarts starting from the tenth epoch (T0 = 10, magenta line). The schedule of ηt used by Zagoruyko & Komodakis (2016) is depicted by the blue line in Figure 1. The same schedule but with η0 = 0.05 is depicted by the red line. The schedule of ηt used in SGDR is also shown in Figure 1, with two settings of the initial period T0 and doubling restart periods.
| Method | depth-k | # params | # runs | CIFAR-10 | CIFAR-100 |
|---|---|---|---|---|---|
| original-ResNet (He et al., 2015) | 110 | 1.7M | mean of 5 | 6.43 | 25.16 |
| original-ResNet (He et al., 2015) | 1202 | 10.2M | mean of 5 | 7.93 | 27.82 |
| stoc-depth (Huang et al., 2016c) | 110 | 1.7M | 1 run | 5.23 | 24.58 |
| stoc-depth (Huang et al., 2016c) | 1202 | 10.2M | 1 run | 4.91 | n/a |
| pre-act-ResNet (He et al., 2016) | 110 | 1.7M | med. of 5 | 6.37 | n/a |
| pre-act-ResNet (He et al., 2016) | 164 | 1.7M | med. of 5 | 5.46 | 24.33 |
| pre-act-ResNet (He et al., 2016) | 1001 | 10.2M | med. of 5 | 4.62 | 22.71 |
| WRN (Zagoruyko & Komodakis, 2016) | 16-8 | 11.0M | 1 run | 4.81 | 22.07 |
| WRN (Zagoruyko & Komodakis, 2016) | 28-10 | 36.5M | 1 run | 4.17 | 20.50 |
| WRN (Zagoruyko & Komodakis, 2016) | 28-10 | 36.5M | 1 run | n/a | 20.04 |
| ours, default (η0 = 0.1) | 28-10 | 36.5M | med. of 5 | 4.24 | 20.33 |
| ours, default (η0 = 0.05) | 28-10 | 36.5M | med. of 5 | 4.13 | 20.21 |
| ours, SGDR T0 = 50, Tmult = 1 | 28-10 | 36.5M | med. of 5 | 4.17 | 19.99 |
| ours, SGDR T0 = 100, Tmult = 1 | 28-10 | 36.5M | med. of 5 | 4.07 | 19.87 |
| ours, SGDR T0 = 200, Tmult = 1 | 28-10 | 36.5M | med. of 5 | 3.86 | 19.98 |
| ours, SGDR T0 = 1, Tmult = 2 | 28-10 | 36.5M | med. of 5 | 4.09 | 19.74 |
| ours, SGDR T0 = 10, Tmult = 2 | 28-10 | 36.5M | med. of 5 | 4.03 | 19.58 |
| ours, default (η0 = 0.1) | 28-20 | 145.8M | med. of 2 | 4.08 | 19.53 |
| ours, default (η0 = 0.05) | 28-20 | 145.8M | med. of 2 | 3.96 | 19.67 |
| ours, SGDR T0 = 50, Tmult = 1 | 28-20 | 145.8M | med. of 2 | 4.01 | 19.28 |
| ours, SGDR T0 = 100, Tmult = 1 | 28-20 | 145.8M | med. of 2 | 3.77 | 19.24 |
| ours, SGDR T0 = 200, Tmult = 1 | 28-20 | 145.8M | med. of 2 | 3.66 | 19.69 |
| ours, SGDR T0 = 1, Tmult = 2 | 28-20 | 145.8M | med. of 2 | 3.91 | 18.90 |
| ours, SGDR T0 = 10, Tmult = 2 | 28-20 | 145.8M | med. of 2 | 3.74 | 18.70 |
Table 1: Test errors of different methods on CIFAR-10 and CIFAR-100 with moderate data augmentation (flip/translation). In the second column k is a widening factor for WRNs. Note that the computational and memory resources used to train all WRN-28-10 are the same. In all other cases they are different, but WRNs are usually faster than original ResNets to achieve the same accuracy (e.g., up to a factor of 8 according to Zagoruyko & Komodakis (2016)). Bold text is used only to highlight better results and is not based on statistical tests (too few runs).

4.2 SINGLE-MODEL RESULTS

Table 1 shows that our experiments reproduce the results given by Zagoruyko & Komodakis (2016) for WRN-28-10 both on CIFAR-10 and CIFAR-100.
These "default" experiments with η0 = 0.1 and η0 = 0.05 correspond to the blue and red lines in Figure 2. The results for η0 = 0.05 show better performance, and therefore we use η0 = 0.05 in our later experiments. SGDR with T0 = 50, T0 = 100 and T0 = 200 for Tmult = 1 performs warm restarts every 50, 100 and 200 epochs, respectively. A single run of SGD with the schedule given by eq. (5) for T0 = 200 shows the best results, suggesting that the original schedule of WRNs might be suboptimal w.r.t. the test error in these settings. However, the same setting with T0 = 200 leads to the worst anytime performance except for the very last epochs. SGDR with T0 = 1, Tmult = 2 and T0 = 10, Tmult = 2 performs its first restart after 1 and 10 epochs, respectively. Then, it doubles the maximum number of epochs for every new restart. The main purpose of this doubling is to reach good test error as soon as possible, i.e., achieve good anytime performance. Figure 2 shows that this is achieved and test errors around 4% on CIFAR-10 and around 20% on CIFAR-100 can be obtained about 2-4 times faster than with the default schedule used by Zagoruyko & Komodakis (2016).
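The epochs at which these restarts fall are simply the cumulative sums of the run lengths Ti, as the small helper below shows (an illustration, not code from the paper). For T0 = 10 and Tmult = 2 within a 200-epoch budget, the restarts occur at epochs 10, 30, 70 and 150; the last three of these are where the model snapshots used in Section 4.3 are taken.

```python
def restart_epochs(t_0, t_mult, budget):
    """Cumulative epochs at which warm restarts occur: T_0, T_0 + T_0*T_mult, ...
    up to the given epoch budget."""
    epochs, t_i, total = [], t_0, 0
    while total + t_i <= budget:
        total += t_i
        epochs.append(total)
        t_i *= t_mult
    return epochs

print(restart_epochs(1, 2, 200))    # [1, 3, 7, 15, 31, 63, 127]
print(restart_epochs(10, 2, 200))   # [10, 30, 70, 150]
```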
[Figure 3: two panels showing median test error (%) of ensembles on CIFAR-10 and CIFAR-100 as a function of the number of runs N and the number of snapshots per run M.]

Figure 3: Test errors of ensemble models built from N runs of SGDR on WRN-28-10 with M model snapshots per run made at epochs 150, 70 and 30 (right before warm restarts of SGDR as suggested by Huang et al. (2016a)). When M = 1 (respectively, M = 2), we aggregate probabilities of softmax layers of snapshot models at epoch index 150 (respectively, at epoch indexes 150 and 70).

|  | CIFAR-10 | CIFAR-100 |
|---|---|---|
| N = 1 run of WRN-28-10 with M = 1 snapshot (median of 16 runs) | 4.03 | 19.57 |
| N = 1 run of WRN-28-10 with M = 3 snapshots per run | 3.51 | 17.75 |
| N = 3 runs of WRN-28-10 with M = 3 snapshots per run | 3.25 | 16.64 |
| N = 16 runs of WRN-28-10 with M = 3 snapshots per run | 3.14 | 16.21 |

Table 2: Test errors of ensemble models on CIFAR-10 and CIFAR-100 datasets.
Since SGDR achieves good performance faster, it may allow us to train larger networks. We therefore investigated whether results on CIFAR-10 and CIFAR-100 can be further improved by making WRNs two times wider, i.e., by training WRN-28-20 instead of WRN-28-10. Table 1 shows that the results indeed improved, by about 0.25% on CIFAR-10 and by about 0.5-1.0% on CIFAR-100. While the WRN-28-20 architecture requires roughly three to four times more computation than WRN-28-10, the aggressive learning rate reduction of SGDR nevertheless allowed us to achieve a better error rate in the same time on WRN-28-20 as we spent on 200 epochs of training on WRN-28-10.
Specifically, Figure 2 (right middle and right bottom) shows that after only 50 epochs, SGDR (even without restarts, using T0 = 50, Tmult = 1) achieved an error rate below 19% (whereas none of the other learning methods performed better than 19.5% on WRN-28-10). We therefore have hope that, by enabling researchers to test new architectures faster, SGDR's good anytime performance may also lead to improvements of the state of the art.

In a final experiment for SGDR by itself, Figure 7 in the appendix compares SGDR and the default schedule with respect to training and test performance. As the figure shows, SGDR optimizes training loss faster than the standard default schedule until about epoch 120. After this, the default schedule overfits, as can be seen by an increase of the test error both on CIFAR-10 and CIFAR-100 (see, e.g., the right middle plot of Figure 7). In contrast, we only witnessed very mild overfitting for SGDR.

4.3 ENSEMBLE RESULTS

Our initial arXiv report on SGDR (Loshchilov & Hutter, 2016) inspired a follow-up study by Huang et al. (2016a) in which the authors suggest to take M snapshots of the models obtained by SGDR (in their paper referred to as a cyclical learning rate schedule and cosine annealing cycles) right before the M last restarts and to use those to build an ensemble, thereby obtaining ensembles
"for free" (in contrast to having to perform multiple independent runs). The authors demonstrated new state-of-the-art results on CIFAR datasets by making ensembles of DenseNet models (Huang et al., 2016b). Here, we investigate whether their conclusions hold for the WRNs used in our study. We used WRN-28-10 trained by SGDR with T0 = 10, Tmult = 2 as our baseline model.

Figure 3 and Table 2 aggregate the results of our study. The original test error of 4.03% on CIFAR-10 and 19.57% on CIFAR-100 (median of 16 runs) can be improved to 3.51% on CIFAR-10 and 17.75% on CIFAR-100 when M = 3 snapshots are taken at epochs 30, 70 and 150: when the learning rate of SGDR with T0 = 10, Tmult = 2 is scheduled to achieve 0 (see Figure 1) and the models are used with uniform weights to build an ensemble. To achieve the same result, one would have to aggregate N = 3 models obtained at epoch 150 of N = 3 independent runs (see N = 3, M = 1 in Figure 3). Thus, the aggregation from snapshots provides a 3-fold speedup in these settings because additional (M > 1-th) snapshots from a single SGDR run are computationally free. Interestingly, aggregation of models from independent runs (when N > 1 and M = 1) does not scale up as well as from M > 1 snapshots of independent runs when the same number of models is considered: the case of N = 3 and M = 3 provides better performance than the cases of M = 1 with N = 18 and N = 21. Not only the number of snapshots M per run but also their origin is crucial. Thus, naively building ensembles from models obtained at the last epochs only (i.e., M = 3 snapshots at epochs 148, 149, 150) did not improve the results (i.e., the baseline of M = 1 snapshot at 150), thereby confirming
the conclusion of Huang et al. (2016a) that snapshots of SGDR provide a useful diversity of predictions for ensembles. Three runs (N = 3) of SGDR with M = 3 snapshots per run are sufficient to greatly improve the results to 3.25% on CIFAR-10 and 16.64% on CIFAR-100, outperforming the results of Huang et al. (2016a). By increasing N to 16 one can achieve 3.14% on CIFAR-10 and 16.21% on CIFAR-100. We believe that these results could be further improved by considering better baseline models than WRN-28-10 (e.g., WRN-28-20).
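The ensembling itself is straightforward: the softmax outputs of the snapshot models are averaged with uniform weights. The sketch below assumes a list of PyTorch models that return logits; it is a minimal illustration, not the evaluation code used for Table 2.

```python
import torch

def ensemble_predict(models, x):
    """Average the softmax probabilities of a list of snapshot models with
    uniform weights and return the ensembled class probabilities."""
    probs = None
    with torch.no_grad():
        for model in models:
            model.eval()
            p = torch.softmax(model(x), dim=1)
            probs = p if probs is None else probs + p
    return probs / len(models)
```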
4.4 EXPERIMENTS ON A DATASET OF EEG RECORDINGS

To demonstrate the generality of SGDR, we also considered a very different domain: a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects with roughly 1000 trials per subject (Schirrmeister et al., 2017). The best classification results obtained with the original pipeline based on convolutional neural networks designed by Schirrmeister et al. (2017) were used as our reference. First, we compared the baseline learning rate schedule with different settings of the total number of epochs and initial learning rates (see Figure 4). When 30 epochs were considered, we dropped the learning rate by a factor of 10 at epoch indexes 10, 15 and 20. As expected, with more epochs used and a similar (budget-proportional) schedule, better results can be achieved. Alternatively, one can consider SGDR and get similar final performance while having better anytime performance, without defining the total budget of epochs beforehand. Similarly to our results on the CIFAR datasets, our experiments with the EEG data confirm that snapshots are useful and the median reference error (about 9%) can be improved i) by 1-2% when model snapshots of a single run are considered, and ii) by 2-3% when model snapshots from both hyperparameter settings are considered.
The latter would correspond to N = 2 in Section 4.3.

4.5 PRELIMINARY EXPERIMENTS ON A DOWNSAMPLED IMAGENET DATASET

In order to additionally validate our SGDR on a larger dataset, we constructed a downsampled version of the ImageNet dataset [P. Chrabaszcz, I. Loshchilov and F. Hutter. A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets, in preparation]. In contrast to earlier attempts (Pouransari & Ghili, 2015), our downsampled ImageNet contains exactly the same images from 1000 classes as the original ImageNet, but resized with box downsampling to 32 × 32 pixels. Thus, this dataset is substantially harder than the original ImageNet dataset because the average number of pixels per image is now two orders of magnitude smaller. The new dataset is also more difficult than the CIFAR datasets because more classes are used and the relevant objects to be classified often cover only a tiny subspace of the image, not most of the image as in the CIFAR datasets. We benchmarked SGD with momentum with the default learning rate schedule, SGDR with T0 = 1, Tmult = 2 and SGDR with T0 = 10, Tmult = 2 on WRN-28-10, all trained with 4 settings of the initial learning rate ηi max: 0.050, 0.025, 0.01 and 0.005.
[Figure 4 panels: test error minus reference error (%) versus epochs (log scale) over the 14 EEG datasets, for the baseline schedule with total budgets of 30, 60, 120, 240 and 480 epochs and for SGDR, with initial learning rates lr = 0.025 and lr = 0.05 (top row), and median and mean results over the 14 datasets (bottom row).]

Figure 4: (Top) Improvements obtained by the baseline learning rate schedule and SGDR w.r.t. the best known reference classification error on a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects with roughly 1000 trials per subject. Both considered approaches were tested with the initial learning rate lr = 0.025 (Top-Left) and lr = 0.05 (Top-Right). Note that the baseline approach is considered with different settings of the total number of epochs: 30, 60, ..., 480. (Bottom) SGDR with lr = 0.025 and lr = 0.05 without and with M model snapshots taken at the last M = nr/2 restarts, where nr is the total number of restarts.
We used the same data augmentation procedure as for the CIFAR datasets. Similarly to the results on the CIFAR datasets, Figure 5 shows that SGDR demonstrates better anytime performance. SGDR with T0 = 10, Tmult = 2, ηi max = 0.01 achieves a top-1 error of 39.24% and a top-5 error of 17.17%, matching the original results by AlexNets (40.7% and 18.2%, respectively) obtained on the original ImageNet with full-size images of ca. 50 times more pixels per image (Krizhevsky et al., 2012b). Interestingly, when the dataset is permuted only within 10 subgroups each formed from 100 classes, SGDR also demonstrates better results (see Figure 8 in the Supplementary Material). An interpretation of this might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0.

Clearly, longer runs (more than the 40 epochs considered in this preliminary experiment) and hyperparameter tuning of learning rates, regularization and other hyperparameters shall further improve the results.