Dataset schema (one row per paper chunk):

| field | type | values |
|---|---|---|
| doi | string | length 10 |
| chunk-id | int64 | 0 to 936 |
| chunk | string | length 401 to 2.02k |
| id | string | length 12 to 14 |
| title | string | length 8 to 162 |
| summary | string | length 228 to 1.92k |
| source | string | length 31 |
| authors | string | length 7 to 6.97k |
| categories | string | length 5 to 107 |
| comment | string (nullable) | length 4 to 398 |
| journal_ref | string (nullable) | length 8 to 194 |
| primary_category | string | length 5 to 17 |
| published | string | length 8 |
| updated | string | length 8 |
| references | list | |
1608.04868 | 3 | Towards Music Captioning: Generating Music Playlist Descriptions
Keunwoo Choi, George Fazekas, Brian McFee, Kyunghyun Cho, Mark Sandler
cs.MM, cs.AI, cs.CL; 2 pages, ISMIR 2016 Late-breaking/session extended abstract
Source: http://arxiv.org/pdf/1608.04868 (published 2016-08-17, updated 2017-01-15)
Abstract: Descriptions are often provided along with recommendations to help users' discovery. Recommending automatically generated music playlists (e.g. personalised playlists) introduces the problem of generating descriptions. In this paper, we propose a method for generating music playlist descriptions, which is called music captioning. In the proposed method, audio content analysis and natural language processing are adopted to utilise the information of each track.

Background: • RNNs: RNNs are neural networks that have a unit with a recurrent connection, whose output is connected to the input of the unit (Figure 1, left). They currently show state-of-the-art performance in tasks that involve sequence modelling. Two types of RNN units are widely used: the Long Short-Term Memory (LSTM) unit [10] and the Gated Recurrent Unit (GRU) [3].
# 2. PROBLEM DEFINITION
The problem of music captioning can be defined as generating a description for a set of music items using their audio content and text data. When the set includes more than one item, it can also be called music playlist captioning.
1 Michael Jackson: Love songs and ballads by Apple Music
2 Your Coffee Break by Spotify
1608.04868 | 4 |
# 3. THE PROPOSED METHOD
© Keunwoo Choi, György Fazekas, Mark Sandler, Brian McFee, Kyunghyun Cho. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: © Keunwoo Choi, György Fazekas, Mark Sandler, Brian McFee, Kyunghyun Cho. "Towards Music Captioning: Generating Music Playlist Descriptions", Extended abstracts for the Late-Breaking Demo Session of the 17th International Society for Music Information Retrieval Conference, 2016.
Both approaches use a sequence-to-sequence model, as illustrated in Figure 2. In the sequence-to-sequence model, the encoder consists of a two-layer RNN with GRUs and encodes the track features into a vector, i.e., the encoded vector summarises the information of the input. This vector is also called the context vector because it provides context
1608.04868 | 5 |
Figure 2. The diagrams of the two proposed approaches, where coloured blocks indicate trainable modules. The first approach uses a pre-trained ConvNet (conv) and word2vec (w2v), and only the sequence-to-sequence model is trained. In the second approach, all blocks are trained: a ConvNet to summarise the audio content and an RNN to summarise the text data of each track. Additional labels (y) such as genres or tags can be provided to help the training.
information to the decoder. The decoder consists of a two-layer RNN with GRUs and decodes the context vector into a sequence of words or word embeddings. The models are written in Keras and uploaded online 3 [6].
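The released implementation is in Keras [6]; the snippet below is only a minimal tf.keras sketch of the architecture described above, with assumed values for the number of tracks N, the caption length M, the feature size, the embedding size and the GRU width, and a cosine-based loss standing in for the exact training objective.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N, M = 10, 20                        # tracks per playlist, words per description (assumed)
FEAT, EMB, UNITS = 350, 300, 256     # track-feature, word-embedding and GRU sizes (assumed)

# Encoder: two-layer GRU summarising the sequence of track features into a context vector.
enc_in = layers.Input(shape=(N, FEAT), name="track_features")
h = layers.GRU(UNITS, return_sequences=True)(enc_in)
context = layers.GRU(UNITS)(h)

# Decoder: two-layer GRU unrolling the context vector into a sequence of word embeddings.
d = layers.RepeatVector(M)(context)
d = layers.GRU(UNITS, return_sequences=True)(d)
d = layers.GRU(UNITS, return_sequences=True)(d)
dec_out = layers.TimeDistributed(layers.Dense(EMB), name="word_embeddings")(d)

model = Model(enc_in, dec_out)
model.compile(optimizer="adam", loss=tf.keras.losses.CosineSimilarity(axis=-1))
model.summary()
```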
# 3.1 Pre-training approach
This approach takes advantage of a pre-trained word embedding model 4 and a pre-trained auto-tagger 5. Therefore, the number of parameters to learn is reduced while leveraging additional data to train the word embedding and the auto-tagger. Each data sample consists of a sequence of N track features as input and an output word sequence of length M, which is an album feature.
1608.04868 | 6 | Input/Output 6: An n-th track feature, t^n ∈ R^{350}, represents one track and is created by concatenating the audio feature, t_a ∈ R^{50}, and the word feature, t_w ∈ R^{300}, i.e. t = [t_a; t_w]. For computing t_a, a convolutional neural network that is trained to predict tags is used to output a 50-dim vector for each track [5]. t_w is computed as sum_k w_k / K, where w_k refers to the embedding of the k-th word in the metadata 7. The word embeddings were trained with the word2vec algorithm on the Google News dataset [13].
A playlist feature is a sequence of word embeddings of the playlist description, i.e. p = [w_m], m = 0, 1, ..., M−1.
3 http://github.com/keunwoochoi/ismir2016-ldb-audio-captioning-model-keras
4 https://radimrehurek.com/gensim/models/word2vec.html
5 https://github.com/keunwoochoi/music-auto_tagging-keras, [5]
6 The dimensions can vary; we describe them in detail for better understanding.
1608.04868 | 7 |
7 Because these word embeddings are distributed representations in a semantic vector space, the average of the words can summarise a bag of words; this was used as a baseline in sentence and paragraph representation [7].
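As a concrete reading of the feature construction above, here is a small NumPy sketch; the 300-dimensional word vectors and the concatenation order are assumptions based on the description, not the authors' code.

```python
import numpy as np

def track_feature(audio_tags, word_vectors):
    """audio_tags: (50,) auto-tagger output t_a; word_vectors: (K, 300) word2vec rows w_k."""
    t_a = np.asarray(audio_tags, dtype=np.float32)
    t_w = np.asarray(word_vectors, dtype=np.float32).mean(axis=0)  # average of the K word embeddings
    return np.concatenate([t_a, t_w])                              # t = [t_a ; t_w], 350-dim

def playlist_feature(word_embeddings):
    """Sequence of word embeddings of the playlist description, p = [w_0, ..., w_{M-1}]."""
    return np.stack(word_embeddings)

t = track_feature(np.random.rand(50), np.random.rand(12, 300))
print(t.shape)   # (350,)
```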
# 3.2 Fully-training approach
The model in this approach includes the training of a ConvNet for audio summarisation and an RNN for text summarisation of each track. The structure of the ConvNet can be similar to the pre-trained one. The RNN module is trained to summarise the text of each track and outputs a sentence vector. These networks can be provided with additional labels (denoted y in Figure 2) to help the training, e.g., genres or tags. In that case, the objective of the whole structure consists of two different tasks, and therefore the training can be more regularised and stable.
Since the audio and text summarisation modules are trainable, they can be more relevant to the captioning task. However, this flexibility requires more training data.
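A hedged sketch of the fully-trained variant's audio branch with an auxiliary label head y; the spectrogram shape, layer sizes and layer choices are placeholders, and only the two-output idea (a captioning loss plus a tag/genre loss) reflects the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

spec_in = layers.Input(shape=(96, 1366, 1), name="log_mel")        # assumed input shape
x = layers.Conv2D(32, 3, activation="relu")(spec_in)
x = layers.MaxPooling2D(4)(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
track_vec = layers.Dense(256, name="track_summary")(x)             # fed into the seq2seq encoder
tag_out = layers.Dense(50, activation="sigmoid", name="tags")(track_vec)  # auxiliary labels y

audio_branch = Model(spec_in, [track_vec, tag_out])
# During joint training, a binary cross-entropy loss on `tags` would be added to the
# captioning loss, which is the regularising second task mentioned above.
```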
1608.04868 | 8 | # 4. EXPERIMENTS AND CONCLUSIONS
We tested the pre-training approach with a private production music dataset. The dataset has 374 albums and 17,354 tracks, with descriptions of tracks and albums, audio signals, and metadata. The learning rate is controlled by ADAM [11] with an objective function of 1 − cosine proximity. The model was trained to predict the album descriptions.
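For reference, a plain NumPy version of the "1 − cosine proximity" objective mentioned above; the exact reduction over time steps and batches is an assumption.

```python
import numpy as np

def one_minus_cosine_proximity(y_true, y_pred, eps=1e-8):
    """Mean of 1 - cos(y_true_m, y_pred_m) over the M predicted word embeddings."""
    num = np.sum(y_true * y_pred, axis=-1)
    den = np.linalg.norm(y_true, axis=-1) * np.linalg.norm(y_pred, axis=-1) + eps
    return float(np.mean(1.0 - num / den))

y_true = np.random.rand(20, 300)   # M x embedding_dim
y_pred = np.random.rand(20, 300)
print(one_minus_cosine_proximity(y_true, y_pred))
```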
The model currently overfits and fails to generate correct sentences. One example of a generated word sequence is dramatic motivating the intense epic action adventure soaring soaring soaring gloriously Roger Deakins cinematography Maryse Alberti. This is expected since there are only 374 output sequences in the dataset: if we use early stopping, the model underfits; otherwise it overfits.
In the future, we plan to address the current problem, the lack of data. The sentence generation can be partly trained with (music) corpora. A word2vec model that is trained with music corpora can be used to reduce the embedding dimension [14]. The model can also be modified so that the audio feature is optional and it mainly relies on metadata. In that case, acquisition of training data becomes more feasible.
1608.04868 | 9 | # 5. ACKNOWLEDGEMENTS
This work was part funded by the FAST IMPACt EPSRC Grant EP/L019981/1 and the European Commission H2020 research and innovation grant AudioCommons (688382). Mark Sandler acknowledges the support of the Royal Society as a recipient of a Wolfson Research Merit Award. Brian McFee is supported by the Moore Sloan Data Science Environment at NYU. Kyunghyun Cho thanks the support by Facebook, Google (Google Faculty Award 2016) and NVidia (GPU Center of Excellence 2015-2016). This work was done while Keunwoo Choi was visiting the Center for Data Science at New York University.
# 6. REFERENCES
[1] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.
[2] Dmitry Bogdanov, Martín Haro, Ferdinand Fuhrmann, Anna Xambó, Emilia Gómez, and Perfecto Herrera. Semantic audio content-based music recommendation and visualization based on user preference examples. Information Processing & Management, 49(1):13–33, 2013.
1608.04868 | 10 | [3] Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
[4] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[5] Keunwoo Choi, George Fazekas, and Mark Sandler. Automatic tagging using deep convolutional neural networks. In International Society for Music Information Retrieval Conference. ISMIR, 2016.
[6] François Chollet. Keras. GitHub repository: https://github.com/fchollet/keras, 2015.
[7] Andrew M Dai, Christopher Olah, and Quoc V Le. Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998, 2015.
1608.04868 | 11 | [8] Douglas Eck, Paul Lamere, Thierry Bertin-Mahieux, and Stephen Green. Automatic generation of social tags for music recommendation. In Advances in Neural Information Processing Systems, pages 385–392, 2008.
[9] Ben Fields, Christophe Rhodes, Mark d'Inverno, et al. Using song social tags and topic models to describe and compare playlists. In 1st Workshop on Music Recommendation and Discovery (WOMRAD), ACM RecSys 2010, Barcelona, Spain, 2010.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[13] T Mikolov and J Dean. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 2013.
1608.04868 | 12 | [13] T Mikolov and J Dean. Distributed representations of words and phrases and their compositionality. Ad- vances in neural information processing systems, 2013.
[14] Sergio Oramas, Luis Espinosa-Anke, Shuo Zhang, Horacio Saggion, and Xavier Serra. Natural language processing for music information retrieval. In 17th International Society for Music Information Retrieval Conference (ISMIR 2016), 2016.
1608.04337 | 0 | arXiv:1608.04337v2 [cs.CV] 24 Jan 2017

# Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure

Min Wang ([email protected]), Baoyuan Liu ([email protected]), Hassan Foroosh ([email protected])
Department of EECS, University of Central Florida, Orlando, FL 32816

Source: http://arxiv.org/pdf/1608.04337 (cs.CV; published 2016-08-15, updated 2017-01-24)
1608.04337 | 1 | # Abstract
Deep convolutional neural networks achieve remarkable visual recognition performance, at the cost of high computational complexity. In this paper, we have a new design of efficient convolutional layers based on three schemes. The 3D convolution operation in a convolutional layer can be considered as performing spatial convolution in each channel and linear projection across channels simultaneously. By unravelling them and arranging the spatial convolution sequentially, the proposed layer is composed of a single intra-channel convolution, of which the computation is negligible, and a linear channel projection. A topological subdivisioning is adopted to reduce the connection between the input channels and output channels. Additionally, we also introduce a spatial "bottleneck" structure that utilizes a convolution-projection-deconvolution pipeline to take advantage of the correlation between adjacent pixels in the input. Our experiments demonstrate that the proposed layers remarkably outperform the standard convolutional layers with regard to accuracy/complexity ratio. Our models achieve similar accuracy to VGG, ResNet-50, ResNet-101 while requiring 42, 4.5, 6.5 times less computation respectively.
1608.04337 | 2 | The most time-consuming building block of the CNN, the convolutional layer, is performed by convolving the 3D input data with a series of 3D kernels. The computational complexity is quadratic in both the kernel size and the number of channels. To achieve state-of-the-art performance, the number of channels needs to be a few hundred, especially for the layers with smaller spatial input dimension, and the kernel size is generally no less than 3.
Several attempts have been made to reduce the amount of computation and parameters in both convolutional layers and fully connected layers. Low-rank decomposition has been extensively explored in various fashions [7][8][9][10][11] to obtain moderate efficiency improvement. Sparse-decomposition-based methods [12][13] achieve higher theoretical reduction of complexity, while the actual speedup is bounded by the efficiency of sparse multiplication implementations. Most of these decomposition-based methods start from a pre-trained model, and perform decomposition and fine-tuning based on it, while trying to maintain similar accuracy. This essentially precludes the option of improving efficiency by designing and training new CNN models from scratch.
1608.04337 | 3 | # 1. Introduction
Deep convolutional neural networks (CNN) have made significant improvement on solving visual recognition problems since the famous work by Krizhevsky et al. in 2012 [1][2][3][4][5]. Thanks to their deep structure, vision-oriented layer designs, and efficient training schemes, recent CNN models from Google [4] and MSRA [5] obtain better than human-level accuracy on the ImageNet ILSVRC dataset [6].
1608.04337 | 4 | The computational complexity of the state-of-the-art models, for both training and inference, is extremely high, requiring several GPUs or clusters of CPUs. On the other hand, in recent state-of-the-art deep CNN models, several heuristics are adopted to alleviate the burden of heavy computation. In [2], the number of channels is reduced by a linear projection before the actual convolutional layer; in [5], the authors utilize a bottleneck structure, in which both the input and the output channels are reduced by linear projection; in [4], 1×n and n×1 asymmetric convolutions are adopted to achieve larger kernel sizes. While these strategies to some extent help to design moderately efficient and deep models in practice, they are not able to provide a comprehensive analysis of optimizing the efficiency of the convolutional layer.

In this work, we propose several schemes to improve the efficiency of convolutional layers. In standard convolutional layers, the 3D convolution can be considered as performing intra-channel spatial convolution and linear channel projection simultaneously, leading to highly redundant
1608.04337 | 5 | computation. These two operations are first unraveled to a set of 2D convolutions in each channel and a subsequent linear channel projection. Then, we make the further modification of performing the 2D convolutions sequentially rather than in parallel. In this way, we obtain a single intra-channel convolutional (SIC) layer that involves only one filter for each input channel and a linear channel projection, thus achieving significantly reduced complexity. By stacking multiple SIC layers, we can train models that are several times more efficient with similar or higher accuracy than models based on the standard convolutional layer.

In a SIC layer, the linear channel projection consumes the majority of the computation. To reduce its complexity, we propose a topological subdivisioning framework between the input channels and output channels as follows: the input channels and the output channels are first rearranged into an s-dimensional tensor, then each output channel is only connected to the input channels that are within its local neighborhood. Such a framework leads to a regular sparsity pattern of the convolutional kernels, which is shown to possess a better performance/cost ratio than the standard convolutional layer in our experiments.
1608.04337 | 6 | Furthermore, we design a spatial "bottleneck" structure to take advantage of the local correlation of adjacent pixels in the input. The spatial dimensions are first reduced by intra-channel convolution with stride, then recovered by deconvolution with the same stride after linear channel projection. Such a design reduces the complexity of linear channel projection without sacrificing the spatial resolution.

(b) Single Intra-Channel Convolutional Layer

Figure 1. Illustration of the convolution pipeline of the standard convolutional layer and the Single Intra-channel Convolutional Layer. In the SIC layer, only one 2D filter is convolved with each input channel.

The above three schemes (SIC layer, topological subdivisioning and spatial "bottleneck" structure) attempt to improve the efficiency of traditional CNN models from different perspectives, and can be easily combined together to achieve lower complexity, as demonstrated thoroughly in the remainder of this paper. Each of these schemes will be explained in detail in Section 2, evaluated against traditional CNN models, and analyzed in Section 3.
1608.04337 | 7 | # 2. Method

In this section, we first review the standard convolutional layer, then introduce the proposed schemes. For the purpose of easy understanding, the first two schemes are explained with mathematical equations and pseudo-code, as well as illustrated with graphical visualization in Figure 5.

# 2.1. Standard Convolutional Layer

Consider the input data I in R^{h×w×n}, where h, w and n are the height, width and the number of channels of the input feature maps, and the convolutional kernel K in R^{k×k×n×n}, where k is the size of the convolutional kernel and n is the number of output channels. The operation of a standard convolutional layer, O ∈ R^{h×w×n} = K ∗ I, is given by Algorithm 1. The complexity of a convolutional layer, measured by the number of multiplications, is

n^2 k^2 h w    (1)

Since the complexity is quadratic in the kernel size, in most recent CNN models the kernel size is limited to 3 × 3 to control the overall running time.
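To make Eq. (1) concrete, a two-line helper that counts the multiplications of a standard convolutional layer; the layer sizes in the example are illustrative, not taken from the paper.

```python
def standard_conv_mults(n, k, h, w):
    """Multiplications of a standard convolutional layer, Eq. (1): n^2 * k^2 * h * w."""
    return n * n * k * k * h * w

# Example: 256 input/output channels, 3x3 kernels, 28x28 feature maps.
print(standard_conv_mults(n=256, k=3, h=28, w=28))   # 462,422,016 multiplications
```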
1608.04337 | 8 | # 2.2. Single Intra-Channel Convolutional Layer

We make the assumption that the number of output channels is equal to the number of input channels, and the input is padded so that the spatial dimensions of the output are the same as the input. We also assume that the residual learning technique is applied to each convolutional layer, namely the input is directly added to the output since they have the same dimension.

In standard convolutional layers, the output features are produced by convolving a group of 3D kernels with the input features along the spatial dimensions. Such a 3D convolution operation can be considered as a combination of 2D spatial convolution inside each channel and linear projection across channels. For each output channel, a spatial

Algorithm 1: Standard Convolutional Layer
  Input: I ∈ R^{h×w×n}
  Parameter: K ∈ R^{k×k×n×n}
  Intermediate Data: Î ∈ R^{(h+k−1)×(w+k−1)×n}
  Output: O ∈ R^{h×w×n}
  Î = zero-padding(I, (k−1)/2)
  for y = 1 to h, x = 1 to w, j = 1 to n do
      O(y, x, j) = sum_{i=1}^{n} sum_{u=1}^{k} sum_{v=1}^{k} K(u, v, i, j) · Î(y+u−1, x+v−1, i)
  end
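A direct NumPy transcription of Algorithm 1, useful as a readable reference rather than an efficient implementation; the cross-correlation indexing follows the pseudo-code above.

```python
import numpy as np

def standard_conv_layer(I, K):
    """I: (h, w, n) input; K: (k, k, n, n) kernels; returns O: (h, w, n) with same padding."""
    h, w, n = I.shape
    k = K.shape[0]
    pad = (k - 1) // 2
    Ipad = np.pad(I, ((pad, pad), (pad, pad), (0, 0)))
    O = np.zeros((h, w, K.shape[3]), dtype=I.dtype)
    for y in range(h):
        for x in range(w):
            patch = Ipad[y:y + k, x:x + k, :]                        # (k, k, n) window
            O[y, x, :] = np.tensordot(patch, K, axes=([0, 1, 2], [0, 1, 2]))
    return O

O = standard_conv_layer(np.random.rand(8, 8, 4), np.random.rand(3, 3, 4, 4))
print(O.shape)   # (8, 8, 4)
```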
1608.04337 | 9 | convolution is performed on each input channel. The spatial convolution is able to capture local structural information, while the linear projection transforms the feature space for learning the necessary non-linearity in the neuron layers. When the number of input and output channels is large, typically hundreds, such a 3D convolutional layer requires an exorbitant amount of computation.

A natural idea is that the 2D spatial convolution and the linear channel projection can be unraveled and performed separately. Each input channel is first convolved with b 2D filters, generating intermediate features that have b times the channels of the input. Then the output is generated by linear channel projection. Unravelling these two operations provides us more freedom of model design by tuning both b and k. The complexity of such a layer is

b(n k^2 + n^2) h w    (2)

Typically, k is much smaller than n. The complexity is approximately linear in b. When b = k^2, this is equivalent to a linear decomposition of the standard convolutional layers [12]. When b < k^2, the complexity is lower than the standard convolutional layer in a low-rank fashion.
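A quick comparison of Eq. (1) and Eq. (2) for some illustrative sizes; the numbers are chosen only to show the trend in b.

```python
def standard_mults(n, k, h, w):
    return n * n * k * k * h * w                 # Eq. (1)

def unravelled_mults(n, k, h, w, b):
    return b * (n * k * k + n * n) * h * w       # Eq. (2)

n, k, h, w = 256, 3, 28, 28
for b in (1, 2, 4, 9):
    ratio = standard_mults(n, k, h, w) / unravelled_mults(n, k, h, w, b)
    print(f"b={b}: standard/unravelled = {ratio:.2f}")
# With k << n the ratio is roughly k^2 / b, i.e. about 9x cheaper for b = 1 and k = 3.
```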
1608.04337 | 10 | Our key observation is that instead of convolving b 2D filters with each input channel simultaneously, we can perform the convolutions sequentially. The above convolutional layer with b filters can be transformed to a framework that has b layers. In each layer, each input channel is first convolved with a single 2D filter, then a linear projection is applied to all the input channels to generate the output channels. In this way, the number of channels is maintained the same throughout all b layers. Algorithm 2 formally describes this framework.

When we consider each of the b layers, only one k × k kernel is convolved with each input channel. This seems to be a risky choice. Convolving with only one filter will not be able to preserve all the information from the input data, and there is very little freedom to learn all the useful local structures. Actually, this will probably lead to a low-pass filter, which is somewhat equivalent to the first principal component of the image. However, the existence of the residual learning module helps to overcome this disadvantage.
1608.04337 | 11 | With residual learning, the input is added to the output. The subsequent layers thus receive information from both the initial input and the output of preceding layers. Figure 5 presents a visual comparison between the proposed method and the standard convolutional layer.
1608.04337 | 12 | Algorithm 2: Single Intra-Channel Convolutional Layer
  Input: I ∈ R^{h×w×n}
  Parameter: K ∈ R^{k×k×n}, P ∈ R^{n×n}
  Intermediate Data: Î ∈ R^{(h+k−1)×(w+k−1)×n}, G ∈ R^{h×w×n}
  Output: O ∈ R^{h×w×n}
  O = I                              // Initialize output as input
  Î = zero-padding(I, (k−1)/2)
  for i = 1 to b do                  // Repeat this layer b times
      for y = 1 to h, x = 1 to w, j = 1 to n do
          G(y, x, j) = sum_{u=1}^{k} sum_{v=1}^{k} K(u, v, j) · Î(y+u−1, x+v−1, j)
      end
      for y = 1 to h, x = 1 to w, l = 1 to n do
          O(y, x, l) = O(y, x, l) + sum_{j=1}^{n} G(y, x, j) P(j, l)
      end
      O = max(O, 0)                  // ReLU
      Î = zero-padding(O, (k−1)/2)
  end
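The sketch below expresses one repeat of Algorithm 2 in PyTorch, assuming the single intra-channel convolution maps onto a depthwise convolution (groups = n) and the linear channel projection onto a 1×1 convolution; it is an illustration of the structure, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SICLayer(nn.Module):
    """One repeat of Algorithm 2: a single k x k filter per input channel (depthwise conv),
    an n x n linear channel projection (1x1 conv), the residual addition, and ReLU."""
    def __init__(self, n, k=3):
        super().__init__()
        self.intra = nn.Conv2d(n, n, kernel_size=k, padding=k // 2, groups=n, bias=False)
        self.project = nn.Conv2d(n, n, kernel_size=1, bias=False)

    def forward(self, x):
        return F.relu(x + self.project(self.intra(x)))

# Stacking b such layers corresponds to the b sequential repeats in Algorithm 2.
block = nn.Sequential(*[SICLayer(n=64, k=3) for _ in range(2)])
y = block(torch.randn(1, 64, 32, 32))
print(y.shape)   # torch.Size([1, 64, 32, 32])
```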
1608.04337 | 13 | # 2.3. Topological Subdivisioning

Given that the standard convolutional layer boils down to single intra-channel convolution and linear projection in the SIC layer, we make a further attempt to reduce the complexity of the linear projection. In [12], the authors proved that extremely high sparsity could be accomplished without sacrificing accuracy. While that sparsity was obtained by fine-tuning and did not possess any structure, we study how to build the sparsity with more regularity. Inspired by the topological ICA framework in [14], we propose an s-dimensional topological subdivisioning between the input and output channels in the convolutional layers. Assuming the number of input channels and output channels are both n, we first arrange the input and output channels as an s-dimensional tensor [d_1, d_2, ..., d_s],

prod_{i=1}^{s} d_i = n    (3)

Each output channel is only connected to its local neighbors in the tensor space rather than all input channels. The size of
1608.04337 | 14 | (a) 2D Topology
Figure 3. Illustration of the Spatial "Bottleneck" Framework
In this section, we introduce a spatial "bottleneck" structure that reduces the amount of computation without decreasing either the spatial resolution or the number of channels, by exploiting the spatial redundancy of the input.
Consider the 3D input data I in R^{h×w×n}; we first apply a single intra-channel convolution to each input channel as introduced in Section 2.2. A k × k kernel is convolved with each input channel with stride k, so that the spatial dimension of the output is reduced to R^{(h/k)×(w/k)×n}. Then a linear projection layer is applied. Finally, we perform a k × k intra-channel deconvolution with stride k to recover the spatial resolution. Figure 3 illustrates the proposed spatial "bottleneck" structure.
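A minimal PyTorch sketch of this convolution-projection-deconvolution pipeline, with the intra-channel (depthwise) choice for both the strided convolution and the deconvolution taken from the description above and everything else (sizes, bias, absence of non-linearities) assumed.

```python
import torch
import torch.nn as nn

class SpatialBottleneck(nn.Module):
    """Intra-channel k x k convolution with stride k shrinks each feature map by k, a 1x1
    linear projection runs at the reduced resolution, and an intra-channel k x k
    deconvolution with stride k restores the original h x w resolution."""
    def __init__(self, n, k=2):
        super().__init__()
        self.reduce = nn.Conv2d(n, n, kernel_size=k, stride=k, groups=n, bias=False)
        self.project = nn.Conv2d(n, n, kernel_size=1, bias=False)
        self.expand = nn.ConvTranspose2d(n, n, kernel_size=k, stride=k, groups=n, bias=False)

    def forward(self, x):
        return self.expand(self.project(self.reduce(x)))

y = SpatialBottleneck(n=64, k=2)(torch.randn(1, 64, 32, 32))
print(y.shape)   # torch.Size([1, 64, 32, 32]) -- resolution is recovered
```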
(b) 3D Topology
Figure 2. 2D & 3D topology for input and output.
the local neighborhood is defined by another s-dimensional tensor, [c_1, c_2, ..., c_s], and the total number of neighbors for each output channel is

prod_{i=1}^{s} c_i = c    (4)
Algorithm 3: Convolutional Layer with Topological Subdivisioning
Input: I ∈ R^{h×w×n}
Parameter: ∏_{i=1}^{s} d_i = n; c_i ≤ d_i, ∀i = 1...s; K ∈ R^{k×k×d_1×...×d_s×c_1×...×c_s}
Intermediate data: Î ∈ R^{(h+k−1)×(w+k−1)×n}, Ĩ ∈ R^{(h+k−1)×(w+k−1)×d_1×...×d_s}
Output: O ∈ R^{h×w×d_1×...×d_s}
Î = zero-padding(I, (k−1)/2)
Rearrange Î to Ĩ
for y = 1 to h, x = 1 to w, j_1 = 1 to d_1, ..., j_s = 1 to d_s do   // topological subdivisioning
    O(y, x, j_1, ..., j_s) = Σ_{i_1=1}^{c_1} ... Σ_{i_s=1}^{c_s} Σ_{u,v=1}^{k} K(u, v, j_1, ..., j_s, i_1, ..., i_s) · Ĩ(y+u−1, x+v−1, (j_1+i_1−2)%d_1+1, ..., (j_s+i_s−2)%d_s+1)
end

The complexity of the proposed topologically subdivisioned convolutional layer relative to the standard convolutional layer is simply measured by c/n. Figure 2 illustrates the 2D and 3D topological subdivisioning between the input channels and the output channels. A formal description of this layer is presented in Algorithm 3 above.
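For concreteness, the 2D connectivity pattern used in Algorithm 3 can be expressed as a binary mask over a dense linear projection. The sketch below is our own illustration (the helper name is hypothetical and PyTorch is assumed); the dimensions follow the stage-2 setting of Table 4.

```python
import torch

def topology_mask(d=(8, 16), c=(4, 8)):
    """0/1 connectivity mask for 2D topological subdivisioning.

    Input and output channels are both arranged on a d[0] x d[1] grid; each
    output channel keeps only a c[0] x c[1] wrap-around neighbourhood of the
    input channels, so the surviving fraction of connections is
    (c[0] * c[1]) / (d[0] * d[1]).
    """
    n = d[0] * d[1]
    mask = torch.zeros(n, n)
    for j1 in range(d[0]):
        for j2 in range(d[1]):
            out_ch = j1 * d[1] + j2
            for i1 in range(c[0]):
                for i2 in range(c[1]):
                    in_ch = ((j1 + i1) % d[0]) * d[1] + ((j2 + i2) % d[1])
                    mask[out_ch, in_ch] = 1.0
    return mask

# Stage-2 setting of Table 4: 128 channels, d = 8 x 16, c = 4 x 8, so only a
# quarter of the connections of a dense projection survive.
m = topology_mask((8, 16), (4, 8))
print(m.sum().item() / m.numel())   # 0.25
```

Multiplying the (out x in) weight matrix of a 1 x 1 projection by this mask reproduces the sparsity pattern; the same idea extends to the k x k case and to 3D topologies.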
When k = 1, the algorithm is suitable for the linear projection layer, and can be directly embedded into Algorithm 2 to further reduce the complexity of the SIC layer.
# 2.4. Spatial "Bottleneck" Structure
In the design of traditional CNN models, there has always been a trade-off between the spatial dimensions and the number of channels. While high spatial resolution is necessary to preserve detailed local information, a large number of channels produces a high-dimensional feature space and learns more complex representations. The complexity of one convolutional layer is determined by the product of these two factors. To maintain an acceptable complexity, the spatial dimensions are reduced by max pooling or strided convolution while the number of channels is increased.
On the other hand, the adjacent pixels in the input of each convolutional layer are correlated, in a similar fashion to the image domain, especially when the spatial resolution is high. While reducing the resolution by simple sub-sampling would obviously lead to a loss of information, such correlation presents considerable redundancy that can be taken advantage of.
Stage | Output | A | B | C | D | E
1 | 108² | (7, 64), stride 2 (all models); 3 × 3 max pooling, stride 3
2 | 36² | (1, 128) (all models)
  |      | (3, 128) × 2 | [3, 4, 128] × 2 | <3, 128> × 4 | <5, 128> × 4 | <3, 128> × 6
3 | 18² | 2 × 2 max pooling, stride 2; (1, 256) (all models)
  |      | (3, 256) × 2 | [3, 4, 256] × 2 | <3, 256> × 4 | <5, 256> × 4 | <3, 256> × 6
4 | 6² | 3 × 3 max pooling, stride 3; (1, 512) (all models)
  |      | (3, 512) × 2 | [3, 4, 512] × 2 | <3, 512> × 4 | <5, 512> × 4 | <3, 512> × 6
  | 1² | (1, 1024); 6 × 6 average pooling, stride 6
  |      | fully connected, 2048; fully connected, 1000; softmax
Table 1. Configurations of the baseline models and the models with the proposed SIC layers. For each convolutional layer, we use numbers in brackets to represent its configuration: k denotes the kernel size and n the number of output channels. Different types of bracket correspond to different convolutional layers: (k, n) is a standard convolutional layer, [k, b, n] denotes an unraveled convolutional layer with b filters for each input channel, and <k, n> represents our SIC layer. The number after the brackets indicates how many times the layer is repeated in each stage.
The spatial resolution of the data is first reduced, then expanded, forming a bottleneck structure. In this three-phase structure, the linear projection phase, which consumes most of the computation, is k^2 times more efficient than a plain linear projection on the original input. The intra-channel convolution and deconvolution phases learn to capture the local correlation of adjacent pixels, while maintaining the spatial resolution of the output.
Stage | Intra-channel Convolution | Linear Projection
2 | 6.6% | 93.4%
3 | 3.4% | 96.6%
4 | 1.7% | 98.3%

Table 2. Distribution of computation between the intra-channel convolution and the linear channel projection in each SIC layer of model C.
# 3. Experiments

We evaluate the performance of our method on the ImageNet LSVRC 2012 dataset, which contains 1000 categories, with 1.2M training images, 50K validation images, and 100K test images. We use Torch to train the CNN models in our framework. Our method is implemented with CUDA and Lua on the Torch platform. The images are first resized to 256 × 256, then randomly cropped to 221 × 221 and flipped horizontally during training. Batch normalization [3] is placed after each convolutional layer and before the ReLU layer. We also adopt the dropout [15] strategy with a ratio of 0.2 during training. Standard stochastic gradient descent with mini-batches of 256 images is used to train the models. We start the learning rate at 0.1 and divide it by a factor of 10 every 30 epochs. Each model is trained for 100 epochs. For batch normalization, we use an exponential moving average to calculate the batch statistics, as implemented in CuDNN [16]. The code is run on a server with 4 Pascal Titan X GPUs. For all the models evaluated below, the top-1 and top-5 error on the validation set with central cropping is reported.
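The optimisation schedule described above can be summarised in a short configuration sketch. Only the mini-batch size, the initial learning rate, the step decay and the number of epochs come from the text; the momentum and weight-decay values below are common defaults and purely our assumption, and the PyTorch API is used instead of the original Torch/Lua setup.

```python
import torch

def make_optimizer(model):
    # SGD on mini-batches of 256 images; lr starts at 0.1 and is divided by 10
    # every 30 epochs, for 100 epochs in total.
    opt = torch.optim.SGD(model.parameters(), lr=0.1,
                          momentum=0.9, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.1)
    return opt, sched
```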
We evaluate the performance and efficiency of a series of models designed using the proposed efficient convolutional layers. To make cross-referencing easier and to help the reader keep track of all the models, each model is indexed with a capital letter.
We compare our method with a baseline CNN model that is built from standard convolutional layers. The details of the baseline models are given in Table 1. The convolutional layers are divided into stages according to their spatial dimensions. Inside each stage, the convolutions are performed with padding so that the output has the same spatial dimensions as the input. Across stages, the spatial dimensions are reduced by max pooling and the number of channels is doubled by a 1 × 1 convolutional layer. One fully connected layer with dropout is added before the logistic regression layer for the final classification. Residual learning is added after every convolutional layer that has the same number of input and output channels.
We evaluate the performance of our method by substituting the standard convolutional layers in the baseline models with the proposed Single Intra-Channel Convolutional (SIC) layers. We leave the 7 × 7 convolutional layer in the first stage and the 1 × 1 convolutional layers across stages unchanged, and only substitute the 3 × 3 convolutional layers.
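As a reference for the substitution, a single <k, n> SIC layer can be sketched as one grouped (per-channel) convolution followed by a 1 × 1 projection. This is a schematic PyTorch rendering under our own assumptions (class name, bias-free convolutions, and batch-normalization/ReLU placement), not the authors' Torch implementation.

```python
import torch.nn as nn

class SICLayer(nn.Module):
    """Single intra-channel convolution followed by a linear channel projection."""
    def __init__(self, channels, k=3):
        super().__init__()
        # one k x k filter per input channel (intra-channel convolution)
        self.intra = nn.Conv2d(channels, channels, k, padding=k // 2,
                               groups=channels, bias=False)
        # linear projection across channels
        self.project = nn.Conv2d(channels, channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.project(self.intra(x))))

# e.g. the <3, 128> x 4 column of Table 1 for one stage of model C:
# stage2 = nn.Sequential(*[SICLayer(128, k=3) for _ in range(4)])
```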
Model | kernel size | # layers per stage | Top-1 err. | Top-5 err. | Complexity
A | 3 | 2 | 30.67% | 11.24% | 1
B | 3 | 2 | 30.69% | 11.27% | ~4/9
C | 3 | 4 | 29.78% | 10.78% | ~2/9
D | 5 | 4 | 29.23% | 10.48% | ~2/9
E | 3 | 6 | 28.83% | 9.88% | ~1/3

Table 3. Top-1 and Top-5 error and complexity per stage of models A to E. The models with the proposed design (models C, D and E) demonstrate a significantly better accuracy/complexity ratio than the baseline model.
In the following sections, the relative complexities are also measured with respect to these layers.
# 3.1. Single Intra-Channel Convolutional Layer
We first substitute the standard convolutional layer with the unraveled convolution configuration in model B. Each input channel is convolved with 4 filters, so that the complexity of B is approximately 4/9 of the baseline model A. In model C, we use two SIC layers to replace one standard convolutional layer. Even though model C has more layers than the baseline model A, its complexity is only 2/9 of model A. In model E, we increase the number of SIC layers from 4 (as in model C) to 6. The complexity of model E is only 1/3 of the baseline. Due to the extremely low complexity of the SIC layer, we can easily increase the model depth without much increase in computation. Table 2 lists the distribution of computation between the intra-channel convolution and the linear channel projection of each SIC layer in model C. The intra-channel convolution generally consumes less than 10% of the total layer computation. Thanks to this advantage, we can utilize a larger kernel size with only a small sacrifice of efficiency. Model D is obtained by setting the kernel size of model C to 5.
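The complexity ratios quoted above can be checked with a back-of-the-envelope multiply count. The script below is our own sanity check, ignoring the 1 × 1 layers across stages and the pooling layers.

```python
def standard(n, h, w, k=3):
    # dense k x k convolution with n input and n output channels
    return k * k * n * n * h * w

def sic(n, h, w, k=3):
    # intra-channel convolution + linear channel projection
    return k * k * n * h * w + n * n * h * w

n, h, w = 128, 36, 36
baseline_stage = 2 * standard(n, h, w)   # model A: (3, 128) x 2
model_c_stage = 4 * sic(n, h, w)         # model C: <3, 128> x 4
print(model_c_stage / baseline_stage)    # ~0.24, close to the ~2/9 quoted above
```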
Table 3 lists the top-1 and top-5 errors and the complexity of models A to E. Comparing models B and A, which have the same number of layers, model B matches the accuracy of model A with less than half the computation. Comparing the SIC-based model C with model B, model C reduces the top-1 error by 1% at half the complexity. This verifies the superior efficiency of the proposed SIC layer. With 5 × 5 kernels, model D obtains a 0.5% accuracy gain with as little as a 5% increase of complexity on average. This demonstrates that increasing the kernel size in the SIC layer provides another way of improving the accuracy/complexity ratio.
# 3.2. Topological Subdivisioning
We first compare the performance of two different topological configurations against the baseline model. Model F adopts a 2D topology with c_i = d_i/2 for both dimensions, which leads to a reduction of complexity by a factor of 4. In model G, we use a 3D topology and set c_i and d_i so that the complexity is reduced by a factor of 4.27. The details of the network configurations are listed in Table 4. The number of topological layers is twice the number of standard convolutional layers in the baseline model, so the overall complexity per stage is reduced by a factor of 2.
Stage | #Channels | 2D topology d1 × d2 | c1 × c2 | 3D topology d1 × d2 × d3 | c1 × c2 × c3
2 | 128 | 8 × 16 | 4 × 8 | 4 × 8 × 4 | 2 × 5 × 3
3 | 256 | 16 × 16 | 8 × 8 | 8 × 8 × 4 | 4 × 5 × 3
4 | 512 | 16 × 32 | 8 × 16 | 8 × 8 × 8 | 4 × 5 × 6
Table 4. Configurations of models F and G, which use 2D and 3D topological subdivisioning. d_i and c_i stand for the tensor and neighborhood dimensions in Algorithm 3. They are chosen so that the complexity is reduced by (approximately, for 3D) a factor of 4.
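The reduction factors quoted for models F and G follow directly from the Table 4 settings; the small script below (ours, for verification only) evaluates prod(d_i) / prod(c_i) per stage.

```python
from math import prod

cfg_2d = {2: ((8, 16), (4, 8)), 3: ((16, 16), (8, 8)), 4: ((16, 32), (8, 16))}
cfg_3d = {2: ((4, 8, 4), (2, 5, 3)), 3: ((8, 8, 4), (4, 5, 3)), 4: ((8, 8, 8), (4, 5, 6))}

for stage, (d, c) in cfg_2d.items():
    print("2D, stage", stage, "reduction:", prod(d) / prod(c))              # 4.0
for stage, (d, c) in cfg_3d.items():
    print("3D, stage", stage, "reduction:", round(prod(d) / prod(c), 2))    # ~4.27
```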
As a comparison, we also train a model H using the straightforward grouping strategy introduced in [1]. Both the input and output channels are divided into 4 groups, and the output channels in each group depend only on the input channels in the corresponding group. The complexity is likewise reduced 4 times in this manner. Table 5 lists the top-1 and top-5 error rates and the complexities of models F to H. Both the 2D and the 3D topology models outperform the grouping method, with lower error rates at the same complexity. When compared with the baseline model, both topology models achieve similar top-1 and top-5 error rates with half the computation.
Finally, we apply the topological subdivisioning to the SIC layer in model I. We choose the 2D topology based on the results in Table 5. In model I, there are 8 convolutional layers for each stage, due to the layer doubling caused by both the SIC layer and the topological subdivisioning. The complexity of each layer is, however, approximately as low as 1/36 of a standard 3 × 3 convolutional layer. Compared to the baseline model, the 2D topology together with the SIC layer achieves a similar error rate while being 9 times faster.
# 3.3. Spatial "Bottleneck" Structure
In our evaluation of layers with the spatial "bottleneck" structure, both the kernel size and the stride of the intra-channel convolution and deconvolution are set to 2. The complexity of such a configuration is a quarter of that of a SIC layer.
Methods | Model | Top-1 err. | Top-5 err. | Complexity
Baseline | A | 30.67% | 11.24% | 1
Grouping | H | 31.23% | 11.73% | ~1/2
2D Topology | F | 30.53% | 11.28% | ~1/2
3D Topology | G | 30.69% | 11.38% | ~15/32
SIC + 2D Topology | I | 30.78% | 11.29% | ~1/9
Table 5. Top-1 and Top-5 error rate and complexity of the topology models and the grouping model.
Both model J and model K are modified from model C by replacing SIC layers with spatial "bottleneck" layers. One SIC layer is substituted with two spatial "bottleneck" layers, the first with no padding and the second with one-pixel padding, leading to a 50% complexity reduction. In model J, every other SIC layer is substituted; in model K, all SIC layers are substituted. Table 6 compares their performance with the baseline model and the SIC-based model. Compared to the SIC model C, model J reduces the complexity by 25% with no loss of accuracy; model K reduces the complexity by 50% with a slight drop in accuracy. Compared to the baseline model A, model K achieves a 9 times speedup with similar accuracy.
Model | # layers | Top-1 err. | Top-5 err. | Complexity
A | 2 | 30.67% | 11.24% | 1
C | 4 | 29.78% | 10.78% | ~2/9
J | 6 | 29.72% | 10.66% | ~1/6
K | 8 | 30.78% | 11.34% | ~1/9
Table 6. Top-1 and Top-5 error rate and complexity of SIC layers with the spatial "bottleneck" structure.
# 3.4. Comparison with standard CNN models
In this section, we increase the depth of our models to compare with recent state-of-the-art CNN models. To go deeper without increasing the complexity too much, we adopt a channel-wise bottleneck structure similar to the one introduced in [5]. In each channel-wise bottleneck structure, the number of channels is first reduced by half by the first layer, then recovered by the second layer. Such a two-layer bottleneck structure has almost the same complexity as a single layer with the same input and output channels, and thus increases the overall depth of the network.
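A channel-wise bottleneck pair, as described above, could be sketched as follows. The sketch uses plain convolutions for clarity; in our models each convolution would be a SIC layer, and the layer ordering and normalization choices here are assumptions rather than the exact published configuration.

```python
import torch.nn as nn

def channel_bottleneck(channels, k=3):
    """Two-layer channel-wise bottleneck: halve the channels, then recover them.

    The pair costs roughly the same as a single full-width k x k layer
    (k^2*n*(n/2) + k^2*(n/2)*n = k^2*n^2) while doubling the depth.
    """
    mid = channels // 2
    return nn.Sequential(
        nn.Conv2d(channels, mid, k, padding=k // 2, bias=False),   # reduce by half
        nn.BatchNorm2d(mid),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid, channels, k, padding=k // 2, bias=False),   # recover the width
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
    )
```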
We gradually increase the number of SIC layers with the channel-wise bottleneck structure in each stage from 8 to 40, and compare the complexity of the resulting models to recent CNN models with similar accuracies. Models L, M, N and O correspond to 8, 12, 24, and 40 layers, respectively. Due to training memory limitations, only the SIC layer is used in the models in this section. While models L and M have the same spatial dimensions and stage structure as in Table 1, models N and O adopt the same structure as in [5]: they have different pooling strides and one more stage right after the first 7 × 7 convolutional layer. The detailed model configurations are put in the supplemental materials.
Figure 4. Comparison of top-1 accuracy versus complexity (number of multiplications) between our models and several previous works.
Figure 4 compares the accuracy and complexity of our models L to O with several previous works, and Table 7 lists the detailed results. Figure 4 provides a visual comparison in the form of a scatter plot; the red marks in the figure represent our models. All of our models demonstrate remarkably lower complexity while being as accurate. Compared to the VGG, ResNet-34, ResNet-50 and ResNet-101 models, our models are 42×, 7.3×, 4.5× and 6.5× more efficient, respectively, with similar or lower top-1 or top-5 error.
# 3.5. Visualization of filters
Given the exceptionally good performance of the proposed methods, one might wonder what type of kernels are actually learned and how they compare with those in traditional convolutional layers. We randomly chose some kernels from the single intra-channel convolutional layers and from the traditional convolutional layers, and visualize them side by side in Figure 5 for an intuitive comparison. Both 3 × 3 and 5 × 5 kernels are shown in the figure. The kernels learned by the proposed method exhibit a much higher level of regularized structure, while the kernels in standard convolutional layers exhibit more randomness. We attribute this to the stronger regularization caused by the reduction of the number of filters.
# 3.6. Discussion on implementation details
In both the SIC layer and the spatial "bottleneck" structure, most of the computation is consumed by the linear channel projection, which is essentially a matrix multiplication. The 2D spatial convolution in each channel has a complexity similar to that of a max pooling layer; memory access takes most of its running time due to the low amount of computation. The efficiency of our CUDA-based implementation is similar to that of open-source libraries like Caffe and Torch. We believe higher efficiency can be achieved with an expert-level GPU implementation such as the one in CuDNN.
Model | Top-1 err. | Top-5 err. | # of Multiplications
AlexNet | 42.5% | 18.2% | 725M
GoogleNet | 31.5% | 10.07% | 1600M
ResNet-18 | 30.43% | 10.76% | 1800M
VGG | 28.5% | 9.9% | 16000M
Our Model L | 28.29% | 9.9% | 381M
ResNet-34 | 26.73% | 8.74% | 3600M
Our Model M | 27.07% | 8.93% | 490M
ResNet-50 | 24.7% | 7.8% | 3800M
Our Model N | 24.76% | 7.58% | 845M
ResNet-101 | 23.6% | 7.1% | 7600M
Our Model O | 23.99% | 7.12% | 1172M
Table 7. Top-1 and Top-5 error rates of single-crop testing with a single model, and the number of multiplications of our models and several previous works. The numbers in this table are generated with a single model and center crop. For AlexNet and GoogLeNet, the top-1 error is missing in the original papers and we use the numbers from Caffe's implementation [17]. For ResNet-34, we use the number from Facebook's implementation [18].
Figure 5. Visualization of convolutional kernels: (a) 3 × 3 standard convolutional layer; (b) 3 × 3 single intra-channel convolutional layer; (c) 5 × 5 standard convolutional layer; (d) 5 × 5 single intra-channel convolutional layer. We compare the 3 × 3 and 5 × 5 kernels learned by the proposed single intra-channel convolutional layer and by the standard convolutional layer. The kernels from the single intra-channel convolution exhibit a higher level of regularity in structure.
The topological subdivisioning layer resembles the structure of 2D and 3D convolution. Unlike sparsity-based methods, the regular connection pattern from topological subdivisioning makes an efficient implementation possible. Currently, our implementation simply discards all the non-connected weights in a convolutional layer.
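One simple way to realise the "discard the non-connected weights" strategy is to keep a fixed 0/1 mask and re-apply it to the projection weights after every optimizer step, so the discarded entries stay at zero during training. This is a hedged sketch using the hypothetical topology_mask() helper shown in Section 2.3, not a description of our CUDA kernels.

```python
import torch

def apply_topology_mask(conv, mask):
    """Zero out the non-connected weights of a convolution layer.

    `mask` is an (out_channels x in_channels) 0/1 tensor; broadcasting over the
    kernel dimensions applies the same connectivity to every spatial tap.
    """
    with torch.no_grad():
        conv.weight *= mask.view(*mask.shape, 1, 1).to(conv.weight)
```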
# 4. Conclusion
This work introduces a novel design of efficient convolutional layers in deep CNNs that involves three specific improvements: (i) a single intra-channel convolutional (SIC) layer; (ii) a topological subdivisioning scheme; and (iii) a spatial "bottleneck" structure. As we demonstrated, they are all powerful schemes that, in different ways, yield a new design of the convolutional layer with higher efficiency, while achieving equal or better accuracy compared to classical designs.
While the numbers of input and output channels remain the same as in the classical models, both the convolutions and the number of connections can be optimized against accuracy in our model: (i) reduces complexity by unravelling the convolution, (ii) uses topology to make the connections in the convolutional layer sparse while maintaining local regularity, and (iii) uses a convolution-deconvolution bottleneck to reduce the convolution cost while maintaining resolution. Although CNNs have been exceptionally successful in terms of recognition accuracy, it is still not clear which architecture is optimal and learns visual information most effectively. The methods presented herein attempt to answer this question by focusing on improving the efficiency of the convolutional layer. We believe this work will inspire more comprehensive studies in the direction of optimizing convolutional layers in deep CNN.
# References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012. 1, 6
[2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. 1
[3] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 1, 5
[4] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. 1
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. 1, 7 | 1608.04337#38 | Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure | Deep convolutional neural networks achieve remarkable visual recognition
performance, at the cost of high computational complexity. In this paper, we
have a new design of efficient convolutional layers based on three schemes. The
3D convolution operation in a convolutional layer can be considered as
performing spatial convolution in each channel and linear projection across
channels simultaneously. By unravelling them and arranging the spatial
convolution sequentially, the proposed layer is composed of a single
intra-channel convolution, of which the computation is negligible, and a linear
channel projection. A topological subdivisioning is adopted to reduce the
connection between the input channels and output channels. Additionally, we
also introduce a spatial "bottleneck" structure that utilizes a
convolution-projection-deconvolution pipeline to take advantage of the
correlation between adjacent pixels in the input. Our experiments demonstrate
that the proposed layers remarkably outperform the standard convolutional
layers with regard to accuracy/complexity ratio. Our models achieve similar
accuracy to VGG, ResNet-50, ResNet-101 while requiring 42, 4.5, 6.5 times less
computation respectively. | http://arxiv.org/pdf/1608.04337 | Min Wang, Baoyuan Liu, Hassan Foroosh | cs.CV | null | null | cs.CV | 20160815 | 20170124 | [
{
"id": "1502.03167"
},
{
"id": "1511.06744"
},
{
"id": "1511.06067"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1608.04337 | 39 | [6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009. 1
[7] Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, 2014. 1
[8] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proc. BMVC, 2014. 1
[9] Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efï¬cient and accurate approxima- tions of nonlinear convolutional networks. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1984â1992, 2015. 1 | 1608.04337#39 | Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure | Deep convolutional neural networks achieve remarkable visual recognition
performance, at the cost of high computational complexity. In this paper, we
have a new design of efficient convolutional layers based on three schemes. The
3D convolution operation in a convolutional layer can be considered as
performing spatial convolution in each channel and linear projection across
channels simultaneously. By unravelling them and arranging the spatial
convolution sequentially, the proposed layer is composed of a single
intra-channel convolution, of which the computation is negligible, and a linear
channel projection. A topological subdivisioning is adopted to reduce the
connection between the input channels and output channels. Additionally, we
also introduce a spatial "bottleneck" structure that utilizes a
convolution-projection-deconvolution pipeline to take advantage of the
correlation between adjacent pixels in the input. Our experiments demonstrate
that the proposed layers remarkably outperform the standard convolutional
layers with regard to accuracy/complexity ratio. Our models achieve similar
accuracy to VGG, ResNet-50, ResNet-101 while requiring 42, 4.5, 6.5 times less
computation respectively. | http://arxiv.org/pdf/1608.04337 | Min Wang, Baoyuan Liu, Hassan Foroosh | cs.CV | null | null | cs.CV | 20160815 | 20170124 | [
{
"id": "1502.03167"
},
{
"id": "1511.06744"
},
{
"id": "1511.06067"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1608.04337 | 40 | [10] Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training cnns with low-rank ï¬lters for efï¬cient image classi- ï¬cation. arXiv preprint arXiv:1511.06744, 2015. 1
[11] Cheng Tai, Tong Xiao, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015. 1
[12] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 806–814, 2015. 1, 3
[13] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2, 2015. 1
[14] Aapo Hyv¨arinen, Patrik Hoyer, and Mika Inki. To- pographic independent component analysis. Neural computation, 13(7):1527â1558, 2001. 3 | 1608.04337#40 | Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure | Deep convolutional neural networks achieve remarkable visual recognition
performance, at the cost of high computational complexity. In this paper, we
have a new design of efficient convolutional layers based on three schemes. The
3D convolution operation in a convolutional layer can be considered as
performing spatial convolution in each channel and linear projection across
channels simultaneously. By unravelling them and arranging the spatial
convolution sequentially, the proposed layer is composed of a single
intra-channel convolution, of which the computation is negligible, and a linear
channel projection. A topological subdivisioning is adopted to reduce the
connection between the input channels and output channels. Additionally, we
also introduce a spatial "bottleneck" structure that utilizes a
convolution-projection-deconvolution pipeline to take advantage of the
correlation between adjacent pixels in the input. Our experiments demonstrate
that the proposed layers remarkably outperform the standard convolutional
layers with regard to accuracy/complexity ratio. Our models achieve similar
accuracy to VGG, ResNet-50, ResNet-101 while requiring 42, 4.5, 6.5 times less
computation respectively. | http://arxiv.org/pdf/1608.04337 | Min Wang, Baoyuan Liu, Hassan Foroosh | cs.CV | null | null | cs.CV | 20160815 | 20170124 | [
{
"id": "1502.03167"
},
{
"id": "1511.06744"
},
{
"id": "1511.06067"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1608.04337 | 41 | [15] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from over- ï¬tting. The Journal of Machine Learning Research, 15(1):1929â1958, 2014. 5
[16] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014. 5
[17] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675–678. ACM, 2014. 8
[18] Sam Gross and Michael Wilber. Resnet training in https://github.com/charlespwd/ torch. project-title, 2016. 8 | 1608.04337#41 | Design of Efficient Convolutional Layers using Single Intra-channel Convolution, Topological Subdivisioning and Spatial "Bottleneck" Structure | Deep convolutional neural networks achieve remarkable visual recognition
performance, at the cost of high computational complexity. In this paper, we
have a new design of efficient convolutional layers based on three schemes. The
3D convolution operation in a convolutional layer can be considered as
performing spatial convolution in each channel and linear projection across
channels simultaneously. By unravelling them and arranging the spatial
convolution sequentially, the proposed layer is composed of a single
intra-channel convolution, of which the computation is negligible, and a linear
channel projection. A topological subdivisioning is adopted to reduce the
connection between the input channels and output channels. Additionally, we
also introduce a spatial "bottleneck" structure that utilizes a
convolution-projection-deconvolution pipeline to take advantage of the
correlation between adjacent pixels in the input. Our experiments demonstrate
that the proposed layers remarkably outperform the standard convolutional
layers with regard to accuracy/complexity ratio. Our models achieve similar
accuracy to VGG, ResNet-50, ResNet-101 while requiring 42, 4.5, 6.5 times less
computation respectively. | http://arxiv.org/pdf/1608.04337 | Min Wang, Baoyuan Liu, Hassan Foroosh | cs.CV | null | null | cs.CV | 20160815 | 20170124 | [
{
"id": "1502.03167"
},
{
"id": "1511.06744"
},
{
"id": "1511.06067"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
}
] |
1608.03983 | 0 | arXiv:1608.03983v5 [cs.LG] 3 May 2017
Published as a conference paper at ICLR 2017
# SGDR: STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS
Ilya Loshchilov & Frank Hutter, University of Freiburg, Freiburg, Germany, {ilya,fh}@cs.uni-freiburg.de
# ABSTRACT
Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
# INTRODUCTION | 1608.03983#0 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 1 | # INTRODUCTION
Deep neural networks (DNNs) are currently the best-performing method for many classiï¬cation problems, such as object recognition from images (Krizhevsky et al., 2012a; Donahue et al., 2014) or speech recognition from audio data (Deng et al., 2013). Their training on large datasets (where DNNs perform particularly well) is the main computational bottleneck: it often requires several days, even on high-performance GPUs, and any speedups would be of substantial value.
The training of a DNN with n free parameters can be formulated as the problem of minimizing a function f : R^n → R. The commonly used procedure to optimize f is to iteratively adjust x_t ∈ R^n (the parameter vector at time step t) using gradient information ∇f_t(x_t) obtained on a relatively small t-th batch of b datapoints. The Stochastic Gradient Descent (SGD) procedure then becomes an extension of the Gradient Descent (GD) to stochastic optimization of f as follows:
x_{t+1} = x_t - η_t ∇f_t(x_t),   (1)
where η_t is a learning rate. One would like to consider second-order information
x_{t+1} = x_t - η_t H_t^{-1} ∇f_t(x_t),   (2) | 1608.03983#1 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 2 | x_{t+1} = x_t - η_t H_t^{-1} ∇f_t(x_t),   (2)
but this is often infeasible since the computation and storage of the inverse Hessian H_t^{-1} is intractable for large n. The usual way to deal with this problem, using limited-memory quasi-Newton methods such as L-BFGS (Liu & Nocedal, 1989), is not currently in favor in deep learning, not least due to (i) the stochasticity of ∇f_t(x_t), (ii) ill-conditioning of f and (iii) the presence of saddle points as a result of the hierarchical geometric structure of the parameter space (Fukumizu & Amari, 2000). Despite some recent progress in understanding and addressing the latter problems (Bordes et al., 2009; Dauphin et al., 2014; Choromanska et al., 2014; Dauphin et al., 2015), state-of-the-art optimization techniques attempt to approximate the inverse Hessian in a reduced way, e.g., by considering only its diagonal to achieve adaptive learning rates. AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014) are notable examples of such methods.
Learning rate schedule | 1608.03983#2 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 3 | 1
Published as a conference paper at ICLR 2017
Learning rate schedule
[Figure 1 plot omitted: learning rate η_t (log scale) versus epochs for the default schedules and the simulated warm-restart schedules described in the caption below.]
Figure 1: Alternative schedule schemes of learning rate ηt over batch index t: default schemes with η0 = 0.1 (blue line) and η0 = 0.05 (red line) as used by Zagoruyko & Komodakis (2016); warm restarts simulated every T0 = 50 (green line), T0 = 100 (black line) and T0 = 200 (grey line) epochs with ηt decaying during i-th run from ηi min = 0 according to eq. (5); warm restarts starting from epoch T0 = 1 (dark green line) and T0 = 10 (magenta line) with doubling (Tmult = 2) periods Ti at every new warm restart. | 1608.03983#3 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 4 | Intriguingly enough, the current state-of-the-art results on CIFAR-10, CIFAR-100, SVHN, Ima- geNet, PASCAL VOC and MS COCO datasets were obtained by Residual Neural Networks (He et al., 2015; Huang et al., 2016c; He et al., 2016; Zagoruyko & Komodakis, 2016) trained with- out the use of advanced methods such as AdaDelta and Adam. Instead, they simply use SGD with momentum 1:
v_{t+1} = µ_t v_t - η_t ∇f_t(x_t),   (3)
x_{t+1} = x_t + v_{t+1},   (4) | 1608.03983#4 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
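For readers who want to see the momentum update of eqs. (3)-(4) in executable form, here is a small NumPy sketch; the gradient function, learning rate and momentum value are illustrative placeholders rather than settings taken from the experiments.

```python
# Minimal sketch of SGD with momentum, eqs. (3)-(4); not the paper's code.
import numpy as np

def momentum_step(x, v, grad, lr=0.05, mu=0.9):
    """v_{t+1} = mu*v_t - lr*grad(x_t);  x_{t+1} = x_t + v_{t+1}."""
    v = mu * v - lr * grad(x)
    return x + v, v

x, v = np.ones(4), np.zeros(4)
for _ in range(3):                      # a few toy steps on f(x) = ||x||^2
    x, v = momentum_step(x, v, lambda p: 2.0 * p)
```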
1608.03983 | 5 | v_{t+1} = µ_t v_t - η_t ∇f_t(x_t),   (3)
x_{t+1} = x_t + v_{t+1},   (4)
where vt is a velocity vector initially set to 0, ηt is a decreasing learning rate and µt is a momentum rate which deï¬nes the trade-off between the current and past observations of âft(xt). The main difï¬culty in training a DNN is then associated with the scheduling of the learning rate and the amount of L2 weight decay regularization employed. A common learning rate schedule is to use a constant learning rate and divide it by a ï¬xed constant in (approximately) regular intervals. The blue line in Figure 1 shows an example of such a schedule, as used by Zagoruyko & Komodakis (2016) to obtain the state-of-the-art results on CIFAR-10, CIFAR-100 and SVHN datasets. | 1608.03983#5 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 6 | In this paper, we propose to periodically simulate warm restarts of SGD, where in each restart the learning rate is initialized to some value and is scheduled to decrease. Four different instantiations of this new learning rate schedule are visualized in Figure 1. Our empirical results suggest that SGD with warm restarts requires 2Ã to 4Ã fewer epochs than the currently-used learning rate schedule schemes to achieve comparable or even better results. Furthermore, combining the networks ob- tained right before restarts in an ensemble following the approach proposed by Huang et al. (2016a) improves our results further to 3.14% for CIFAR-10 and 16.21% for CIFAR-100. We also demon- strate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset.
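As a toy illustration of the ensembling step mentioned above (combining the networks obtained right before restarts), the NumPy sketch below averages the softmax outputs of several snapshot models; the logits arrays, batch size and class count are made-up placeholders, and this is not the authors' evaluation code.

```python
# Toy sketch: average class probabilities over M snapshot models.
import numpy as np

def ensemble_predict(snapshot_logits):
    """snapshot_logits: list of (batch, classes) logit arrays, one per snapshot."""
    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    probs = np.mean([softmax(z) for z in snapshot_logits], axis=0)
    return probs.argmax(axis=1)

logits_m1 = np.random.randn(8, 10)   # hypothetical snapshot 1 outputs
logits_m2 = np.random.randn(8, 10)   # hypothetical snapshot 2 outputs
preds = ensemble_predict([logits_m1, logits_m2])
```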
1 More specifically, they employ Nesterov's momentum (Nesterov, 1983; 2013)
2 RELATED WORK
2.1 RESTARTS IN GRADIENT-FREE OPTIMIZATION | 1608.03983#6 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 7 | When optimizing multimodal functions one may want to ï¬nd all global and local optima. The tractability of this task depends on the landscape of the function at hand and the budget of func- tion evaluations. Gradient-free optimization approaches based on niching methods (Preuss, 2015) usually can deal with this task by covering the search space with dynamically allocated niches of local optimizers. However, these methods usually work only for relatively small search spaces, e.g., n < 10, and do not scale up due to the curse of dimensionality (Preuss, 2010). Instead, the current state-of-the-art gradient-free optimizers employ various restart mechanisms (Hansen, 2009; Loshchilov et al., 2012). One way to deal with multimodal functions is to iteratively sample a large number λ of candidate solutions, make a step towards better solutions and slowly shape the sampling distribution to maximize the likelihood of successful steps to appear again (Hansen & Kern, 2004). The larger the λ, the more global search is performed requiring more function evaluations. In order to achieve good anytime performance, it is common to start with a small λ and increase it (e.g., by | 1608.03983#7 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 8 | search is performed requiring more function evaluations. In order to achieve good anytime performance, it is common to start with a small λ and increase it (e.g., by doubling) after each restart. This approach works best on multimodal functions with a global funnel structure and also improves the results on ill-conditioned problems where numerical issues might lead to premature convergence when λ is small (Hansen, 2009). | 1608.03983#8 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 10 | Gradient-based optimization algorithms such as BFGS can also perform restarts to deal with mul- timodal functions (Ros, 2009). In large-scale settings when the usual number of variables n is on the order of 103 â 109, the availability of gradient information provides a speedup of a factor of n w.r.t. gradient-free approaches. Warm restarts are usually employed to improve the convergence rate rather than to deal with multimodality: often it is sufï¬cient to approach any local optimum to a given precision and in many cases the problem at hand is unimodal. Fletcher & Reeves (1964) proposed to ï¬esh the history of conjugate gradient method every n or (n + 1) iterations. Powell (1977) proposed to check whether enough orthogonality between âf (xtâ1) and âf (xt) has been lost to warrant another warm restart. Recently, OâDonoghue & Candes (2012) noted that the iterates of accelerated gradient schemes proposed by Nesterov (1983; 2013) exhibit a periodic behavior if momentum is overused. The period of the oscillations is proportional to the square root of the local condition number of the (smooth | 1608.03983#10 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 11 | exhibit a periodic behavior if momentum is overused. The period of the oscillations is proportional to the square root of the local condition number of the (smooth convex) objective function. The authors showed that ï¬xed warm restarts of the algorithm with a period proportional to the conditional number achieves the optimal linear convergence rate of the original accelerated gradient scheme. Since the condition number is an unknown parameter and its value may vary during the search, they proposed two adaptive warm restart techniques (OâDonoghue & Candes, 2012): | 1608.03983#11 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 12 | The function scheme restarts whenever the objective function increases.
⢠The gradient scheme restarts whenever the angle between the momentum term and the negative gradient is obtuse, i.e, when the momentum seems to be taking us in a bad direc- tion, as measured by the negative gradient at that point. This scheme resembles the one of Powell (1977) for the conjugate gradient method.
OâDonoghue & Candes (2012) showed (and it was conï¬rmed in a set of follow-up works) that these simple schemes provide an acceleration on smooth functions and can be adjusted to accelerate state- of-the-art methods such as FISTA on nonsmooth functions.
Smith (2015; 2016) recently introduced cyclical learning rates for deep learning, his approach is closely-related to our approach in its spirit and formulation but does not focus on restarts.
Yang & Lin (2015) showed that Stochastic subGradient Descent with restarts can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems where the epigraph of the objective function is a polyhedron. In contrast to our work, they never increase the learning rate to perform restarts but decrease it geometrically at each epoch. To perform restarts, they periodically reset the current solution to the averaged solution from the previous epoch.
3 | 1608.03983#12 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 13 | 3
Published as a conference paper at ICLR 2017
# 3 STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS (SGDR)
The existing restart techniques can also be used for stochastic gradient descent if the stochasticity is taken into account. Since gradients and loss values can vary widely from one batch of the data to another, one should denoise the incoming information: by considering averaged gradients and losses, e.g., once per epoch, the above-mentioned restart techniques can be used again.
In this work, we consider one of the simplest warm restart approaches. We simulate a new warm- started run / restart of SGD once Ti epochs are performed, where i is the index of the run. Impor- tantly, the restarts are not performed from scratch but emulated by increasing the learning rate ηt while the old value of xt is used as an initial solution. The amount of this increase controls to which extent the previously acquired information (e.g., momentum) is used.
Within the i-th run, we decay the learning rate with a cosine annealing for each batch as follows:
η_t = η^i_min + ½ (η^i_max - η^i_min)(1 + cos(π T_cur / T_i)),   (5) | 1608.03983#13 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 14 | ηt = ηi min + 1 2 (ηi max â ηi min)(1 + cos( Tcur Ti Ï)), (5)
where η^i_min and η^i_max are ranges for the learning rate, and T_cur accounts for how many epochs have been performed since the last restart. Since T_cur is updated at each batch iteration t, it can take discretized fractional values such as 0.1, 0.2, etc. Thus, η_t = η^i_max when t = 0 and T_cur = 0. Once T_cur = T_i, the cos function will output -1 and thus η_t = η^i_min. The decrease of the learning rate is shown in Figure 1 for fixed T_i = 50, T_i = 100 and T_i = 200; note that the logarithmic axis obfuscates the typical shape of the cosine function.
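A minimal sketch of this schedule, assuming per-epoch rather than per-batch updates and illustrative values for η_min, η_max, T_0 and T_mult, is given below; it also implements the restart-period multiplication option discussed in the next paragraph and is not the authors' reference code.

```python
# Sketch of cosine annealing (eq. 5) with warm restarts and period doubling.
import math

def sgdr_schedule(total_epochs, t0=10, t_mult=2, eta_min=0.0, eta_max=0.05):
    """Return one learning rate per epoch (simplified to per-epoch updates)."""
    lrs, t_cur, t_i = [], 0, t0
    for _ in range(total_epochs):
        lrs.append(eta_min + 0.5 * (eta_max - eta_min)
                   * (1.0 + math.cos(math.pi * t_cur / t_i)))
        t_cur += 1
        if t_cur >= t_i:          # warm restart: reset T_cur, multiply period by T_mult
            t_cur, t_i = 0, t_i * t_mult
    return lrs

lrs = sgdr_schedule(70)           # with T_0=10, T_mult=2: restarts after epochs 10, 30, 70
```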
In order to improve anytime performance, we suggest an option to start with an initially small Ti and increase it by a factor of Tmult at every restart (see, e.g., Figure 1 for T0 = 1, Tmult = 2 and T0 = 10, Tmult = 2). It might be of great interest to decrease ηi min at every new restart. However, for the sake of simplicity, here, we keep ηi min the same for every i to reduce the number of hyperparameters involved. | 1608.03983#14 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 15 | Since our simulated warm restarts (the increase of the learning rate) often temporarily worsen per- formance, we do not always use the last xt as our recommendation for the best solution (also called the incumbent solution). While our recommendation during the ï¬rst run (before the ï¬rst restart) is indeed the last xt, our recommendation after this is a solution obtained at the end of the last per- formed run at ηt = ηi min. We emphasize that with the help of this strategy, our method does not require a separate validation data set to determine a recommendation.
# 4 EXPERIMENTAL RESULTS
4.1 EXPERIMENTAL SETTINGS
We consider the problem of training Wide Residual Neural Networks (WRNs; see Zagoruyko & Komodakis (2016) for details) on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009). We will use the abbreviation WRN-d-k to denote a WRN with depth d and width k. Zagoruyko & Komodakis (2016) obtained the best results with a WRN-28-10 architecture, i.e., a Residual Neural Network with d = 28 layers and k = 10 times more ï¬lters per layer than used in the original Residual Neural Networks (He et al., 2015; 2016). | 1608.03983#15 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 16 | The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 32Ã32 color images drawn from 10 and 100 classes, respectively, split into 50,000 train and 10,000 test images. For image preprocessing Zagoruyko & Komodakis (2016) performed global contrast normalization and ZCA whitening. For data augmentation they performed horizontal ï¬ips and random crops from the image padded by 4 pixels on each side, ï¬lling missing pixels with reï¬ections of the original image.
For training, Zagoruyko & Komodakis (2016) used SGD with Nesterovâs momentum with initial learning rate set to η0 = 0.1, weight decay to 0.0005, dampening to 0, momentum to 0.9 and minibatch size to 128. The learning rate is dropped by a factor of 0.2 at 60, 120 and 160 epochs, with a total budget of 200 epochs. We reproduce the results of Zagoruyko & Komodakis (2016) with the same settings except that i) we subtract per-pixel mean only and do not use ZCA whitening; ii) we use SGD with momentum as described by eq. (3-4) and not Nesterovâs momentum.
4
Published as a conference paper at ICLR 2017 | 1608.03983#16 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 17 | 4
Published as a conference paper at ICLR 2017
# WRN-28-10 on CIFAR-10
# WRN-28-10 on CIFAR-100
2
50
Default, Ir=0.1 Default, r=0.05 Ty = 50, Tra = 1 20 +> = 100, te 1 40 = Ty = 200, Try = 4 = = 15 PR T0= Trt =2 = 30 o o 8 10 8 20 F F 10 0 i} 50 100 150 200 50 100 150 200 Epochs Epochs WRN-28-10 on CIFAR-10 WRN-28-10 on CIFAR-100 5 21 20.5 45 Test error (%) ES Test error (%) oS a 19 3.5 18.5 3 18 50 100 150 200 50 100 150 200 Epochs Epochs WRN-28-20 on CIFAR-10 WRN-28-20 on CIFAR-100 5 21 y 20.5 45 20 Test error (%) ES Test error (%) oS a 19 3.5 18.5 3 18 50 100 150 200 50 100 150 200 Epochs Epochs | 1608.03983#17 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 18 | Figure 2: Test errors on CIFAR-10 (left column) and CIFAR-100 (right column) datasets. Note that for SGDR we only plot the recommended solutions. The top and middle rows show the same results on WRN-28-10, with the middle row zooming into the good performance region of low test error. The bottom row shows performance with a wider network, WRN-28-20. The results of the default learning rate schedules of Zagoruyko & Komodakis (2016) with η0 = 0.1 and η0 = 0.05 are depicted by the blue and red lines, respectively. The schedules of ηt used in SGDR are shown with i) restarts every T0 = 50 epochs (green line); ii) restarts every T0 = 100 epochs (black line); iii) restarts every T0 = 200 epochs (gray line); iv) restarts with doubling (Tmult = 2) periods of restarts starting from the ï¬rst epoch (T0 = 1, dark green line); and v) restarts with doubling (Tmult = 2) periods of restarts starting from the tenth epoch (T0 = 10, magenta line). | 1608.03983#18 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 20 | original-ResNet (He et al., 2015) stoc-depth (Huang et al., 2016c) pre-act-ResNet (He et al., 2016) WRN (Zagoruyko & Komodakis, 2016) depth-k 110 1202 110 1202 110 164 1001 16-8 28-10 28-10 28-10 28-10 28-10 28-10 28-10 28-10 28-10 28-20 28-20 28-20 28-20 28-20 28-20 28-20 # runs # params 1.7M mean of 5 10.2M mean of 5 1 run 1.7M 1 run 10.2M med. of 5 1.7M 1.7M med. of 5 10.2M med. of 5 11.0M 36.5M 36.5M 1 run 1 run 1 run 36.5M med. of 5 36.5M med. of 5 36.5M med. of 5 36.5M med. of 5 36.5M med. of 5 36.5M med. of 5 36.5M med. of 5 145.8M med. of 2 145.8M med. of 2 145.8M med. of 2 145.8M med. of 2 145.8M med. of 2 145.8M med. of 2 145.8M med. of | 1608.03983#20 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 22 | Table 1: Test errors of different methods on CIFAR-10 and CIFAR-100 with moderate data aug- mentation (ï¬ip/translation). In the second column k is a widening factor for WRNs. Note that the computational and memory resources used to train all WRN-28-10 are the same. In all other cases they are different, but WRNs are usually faster than original ResNets to achieve the same accuracy (e.g., up to a factor of 8 according to Zagoruyko & Komodakis (2016)). Bold text is used only to highlight better results and is not based on statistical tests (too few runs).
4.2 SINGLE-MODEL RESULTS
Table 1 shows that our experiments reproduce the results given by Zagoruyko & Komodakis (2016) for WRN-28-10 both on CIFAR-10 and CIFAR-100. These âdefaultâ experiments with η0 = 0.1 and η0 = 0.05 correspond to the blue and red lines in Figure 2. The results for η0 = 0.05 show better performance, and therefore we use η0 = 0.05 in our later experiments. | 1608.03983#22 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 23 | SGDR with T0 = 50, T0 = 100 and T0 = 200 for Tmult = 1 perform warm restarts every 50, 100 and 200 epochs, respectively. A single run of SGD with the schedule given by eq. (5) for T0 = 200 shows the best results suggesting that the original schedule of WRNs might be suboptimal w.r.t. the test error in these settings. However, the same setting with T0 = 200 leads to the worst anytime performance except for the very last epochs.
SGDR with T0 = 1, Tmult = 2 and T0 = 10, Tmult = 2 performs its ï¬rst restart after 1 and 10 epochs, respectively. Then, it doubles the maximum number of epochs for every new restart. The main purpose of this doubling is to reach good test error as soon as possible, i.e., achieve good anytime performance. Figure 2 shows that this is achieved and test errors around 4% on CIFAR-10 and around 20% on CIFAR-100 can be obtained about 2-4 times faster than with the default schedule used by Zagoruyko & Komodakis (2016).
Figure 3: Test errors of ensemble models built from N runs of SGDR on WRN-28-10 with M model snapshots per run made at epochs 150, 70 and 30 (right before warm restarts of SGDR as suggested by Huang et al. (2016a)). When M = 1 (respectively, M = 2), we aggregate probabilities of softmax layers of snapshot models at epoch index 150 (respectively, at epoch indexes 150 and 70).
Since SGDR achieves good performance faster, it may allow us to train larger networks. We therefore investigated whether the results on CIFAR-10 and CIFAR-100 can be further improved by making WRNs two times wider, i.e., by training WRN-28-20 instead of WRN-28-10. Table 1 shows that the results indeed improved, by about 0.25% on CIFAR-10 and by about 0.5-1.0% on CIFAR-100. While the WRN-28-20 architecture requires roughly three to four times more computation than WRN-28-10, the aggressive learning rate reduction of SGDR nevertheless allowed us to achieve a better error rate in the same time on WRN-28-20 as we spent on 200 epochs of training on WRN-28-10. Specifically, Figure 2 (right middle and right bottom) shows that after only 50 epochs, SGDR (even without restarts, using T0 = 50, Tmult = 1) achieved an error rate below 19% (whereas none of the other learning methods performed better than 19.5% on WRN-28-10). We therefore have hope that, by enabling researchers to test new architectures faster, SGDR's good anytime performance may also lead to improvements of the state of the art.
In a final experiment for SGDR by itself, Figure 7 in the appendix compares SGDR and the default schedule with respect to training and test performance. As the figure shows, SGDR optimizes training loss faster than the standard default schedule until about epoch 120. After this, the default schedule overfits, as can be seen by an increase of the test error both on CIFAR-10 and CIFAR-100 (see, e.g., the right middle plot of Figure 7). In contrast, we only witnessed very mild overfitting for SGDR.
4.3 ENSEMBLE RESULTS
Our initial arXiv report on SGDR (Loshchilov & Hutter, 2016) inspired a follow-up study by Huang et al. (2016a) in which the authors suggest taking M snapshots of the models obtained by SGDR (in their paper referred to as cyclical learning rate schedule and cosine annealing cycles) right before the last M restarts and using those to build an ensemble, thereby obtaining ensembles "for free" (in contrast to having to perform multiple independent runs). The authors demonstrated new state-of-the-art results.
Figure 3 and Table 2 aggregate the results of our study. The original test error of 4.03% on CIFAR-10 and 19.57% on CIFAR-100 (median of 16 runs) can be improved to 3.51% on CIFAR-10 and 17.75% on CIFAR-100 when M = 3 snapshots are taken at epochs 30, 70 and 150: when the learning rate of SGDR with T0 = 10, Tmult = 2 is scheduled to achieve 0 (see Figure 1) and the models are used with uniform weights to build an ensemble. To achieve the same result, one would have to aggregate N = 3 models obtained at epoch 150 of N = 3 independent runs (see N = 3, M = 1 in Figure 3). Thus, the aggregation from snapshots provides a 3-fold speedup in these settings because additional (M > 1-th) snapshots from a single SGDR run are computationally free. Interestingly, aggregation of models from independent runs (when N > 1 and M = 1) does not scale up as well as from M > 1 snapshots of independent runs when the same number of models is considered: the case of N = 3 and M = 3 provides better performance than the cases of M = 1 with N = 18 and N = 21.
Not only the number of snapshots M per run but also their origin is crucial. Thus, naively building ensembles from models obtained at the last epochs only (i.e., M = 3 snapshots at epochs 148, 149, 150) did not improve the results (i.e., the baseline of M = 1 snapshot at epoch 150), thereby confirming the conclusion of Huang et al. (2016a) that snapshots of SGDR provide a useful diversity of predictions for ensembles.
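For concreteness, the uniform-weight aggregation of snapshot predictions used in these ensembles can be sketched as follows; the array layout and function name are our own illustration, not the evaluation code of the paper.

```python
import numpy as np

def ensemble_predict(snapshot_probs):
    """Aggregate snapshot models by averaging their softmax outputs.

    `snapshot_probs` has shape (num_snapshots, num_examples, num_classes),
    where num_snapshots = N runs x M snapshots per run. All snapshots get
    uniform weights; the predicted class is the argmax of the mean.
    """
    mean_probs = snapshot_probs.mean(axis=0)
    return mean_probs.argmax(axis=1)

# Toy example with N = 3 runs and M = 3 snapshots per run (9 snapshot models):
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(9, 100))   # 9 snapshots, 100 examples, 10 classes
labels = ensemble_predict(probs)
```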
Three runs (N = 3) of SGDR with M = 3 snapshots per run are sufficient to greatly improve the results to 3.25% on CIFAR-10 and 16.64% on CIFAR-100, outperforming the results of Huang et al. (2016a). By increasing N to 16 one can achieve 3.14% on CIFAR-10 and 16.21% on CIFAR-100. We believe that these results could be further improved by considering better baseline models than WRN-28-10 (e.g., WRN-28-20).
4.4 EXPERIMENTS ON A DATASET OF EEG RECORDINGS
To demonstrate the generality of SGDR, we also considered a very different domain: a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects with roughly 1000 trials per subject (Schirrmeister et al., 2017). The best classification results obtained with the original pipeline based on convolutional neural networks designed by Schirrmeister et al. (2017) were used as our reference. First, we compared the baseline learning rate schedule with different settings of the total number of epochs and initial learning rates (see Figure 4). When 30 epochs were considered, we dropped the learning rate by a factor of 10 at epoch indexes 10, 15 and 20. As expected, with more epochs used and a similar (budget-proportional) schedule, better results can be achieved. Alternatively, one can consider SGDR and get a similar final performance while having a better anytime performance, without defining the total budget of epochs beforehand.
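The budget-proportional baseline schedule can be written down explicitly. In the sketch below, the drop points at 1/3, 1/2 and 2/3 of the budget reproduce the epochs 10, 15 and 20 used for the 30-epoch setting; their proportional scaling to other budgets is our reading of the text rather than a quoted specification.

```python
def baseline_lr(epoch, total_epochs, lr0):
    """Step schedule: divide the learning rate by 10 at fixed fractions of the budget."""
    num_drops = sum(epoch >= total_epochs * frac for frac in (1/3, 1/2, 2/3))
    return lr0 * (0.1 ** num_drops)

# For a 30-epoch budget and lr0 = 0.025 this yields drops at epochs 10, 15 and 20.
print([baseline_lr(e, 30, 0.025) for e in (0, 9, 10, 15, 20)])
```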
Similarly to our results on the CIFAR datasets, our experiments with the EEG data confirm that snapshots are useful and that the median reference error (about 9%) can be improved i) by 1-2% when model snapshots of a single run are considered, and ii) by 2-3% when model snapshots from both hyperparameter settings are considered. The latter would correspond to N = 2 in Section 4.3.
4.5 PRELIMINARY EXPERIMENTS ON A DOWNSAMPLED IMAGENET DATASET
In order to additionally validate our SGDR on a larger dataset, we constructed a downsampled version of the ImageNet dataset [P. Chrabaszcz, I. Loshchilov and F. Hutter. A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets, in preparation]. In contrast to earlier attempts (Pouransari & Ghili, 2015), our downsampled ImageNet contains exactly the same images from 1000 classes as the original ImageNet, but resized with box downsampling to 32 × 32 pixels. Thus, this dataset is substantially harder than the original ImageNet dataset because the average number of pixels per image is now two orders of magnitude smaller. The new dataset is also more difficult than the CIFAR datasets because more classes are used and the relevant objects to be classified often cover only a tiny subspace of the image and not most of the image as in the CIFAR datasets.
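As an illustration of this preprocessing, box downsampling to 32 × 32 pixels can be done with Pillow as sketched below; the file names are placeholders and this is not the script actually used to build the dataset.

```python
from PIL import Image

def box_downsample(path_in, path_out, size=32):
    """Resize an image to size x size pixels using box (area-averaging) filtering."""
    with Image.open(path_in) as img:
        img.convert("RGB").resize((size, size), resample=Image.BOX).save(path_out)

# box_downsample("some_imagenet_image.JPEG", "some_imagenet_image_32x32.png")
```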
We benchmarked SGD with momentum with the default learning rate schedule, SGDR with T0 = 1, Tmult = 2 and SGDR with T0 = 10, Tmult = 2 on WRN-28-10, all trained with 4 settings of the initial learning rate η^i_max: 0.050, 0.025, 0.01 and 0.005.
Figure 4: (Top) Improvements obtained by the baseline learning rate schedule and SGDR w.r.t. the best known reference classification error on a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left hand and foot movements of 14 subjects with roughly 1000 trials per subject. Both considered approaches were tested with the initial learning rate lr = 0.025 (Top-Left) and lr = 0.05 (Top-Right). Note that the baseline approach is considered with different settings of the total number of epochs: 30, 60, ..., 480. (Bottom) SGDR with lr = 0.025 and lr = 0.05 without and with M model snapshots taken at the last M = nr/2 restarts, where nr is the total number of restarts.
We used the same data augmentation procedure as for the CIFAR datasets. Similarly to the results on the CIFAR datasets, Figure 5 shows that SGDR demonstrates better anytime performance. SGDR with T0 = 10, Tmult = 2, η^i_max = 0.01 achieves a top-1 error of 39.24% and a top-5 error of 17.17%, matching the original results by AlexNets (40.7% and 18.2%, respectively) obtained on the original ImageNet with full-size images of ca. 50 times more pixels per image (Krizhevsky et al., 2012b). Interestingly, when the dataset is permuted only within 10 subgroups each formed from 100 classes, SGDR also demonstrates better results (see Figure 8 in the Supplementary Material). An interpretation of this might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to 0.
Clearly, longer runs (more than the 40 epochs considered in this preliminary experiment) and hyperparameter tuning of learning rates, regularization and other hyperparameters should further improve the results.
Figure 5: Top-1 and Top-5 test errors obtained by SGD with momentum with the default learning rate schedule, SGDR with T0 = 1, Tmult = 2 and SGDR with T0 = 10, Tmult = 2 on WRN-28-10 trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32 × 32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Four settings of the initial learning rate are considered: 0.050, 0.025, 0.01 and 0.005.
# 5 DISCUSSION
Our results suggest that even without any restarts the proposed aggressive learning rate schedule given by eq. (5) is competitive w.r.t. the default schedule when training WRNs on the CIFAR-10 (e.g., for T0 = 200, Tmult = 1) and CIFAR-100 datasets. In practice, the proposed schedule requires only two hyperparameters to be defined: the initial learning rate and the total number of epochs.
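In current frameworks this schedule is available off the shelf. For instance, a PyTorch training loop could be configured as in the sketch below; this is our illustration, and the optimizer hyperparameters shown are placeholders rather than the exact values used in the paper.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = torch.nn.Linear(10, 2)   # placeholder for a WRN model
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)

# First restart after T_0 = 10 epochs; each subsequent run is T_mult = 2 times longer.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=2, eta_min=0.0)

for epoch in range(150):
    # ... run one training epoch here ...
    scheduler.step()   # may also be called per batch with a fractional epoch index
```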
We found that the anytime performance of SGDR remains similar when shorter epochs (i.e., fewer training examples per epoch) are considered (see Section 8.1 in the Supplementary Material).
One should not suppose that the parameter values used in this study and many other works with (Residual) Neural Networks are selected to demonstrate the fastest decrease of the training error. Instead, the best validation and/or test errors are in focus. Notably, the validation error is rarely used when training Residual Neural Networks because the recommendation is defined by the final solution (in our approach, the final solution of each run). One could use the validation error to determine the optimal initial learning rate and then run on the whole dataset; this could further improve results.
The main purpose of our proposed warm restart scheme for SGD is to improve its anytime performance. While we mentioned that restarts can be useful to deal with multi-modal functions, we do not claim that we observe any effect related to multi-modality. As we noted earlier, one could decrease η^i_max and η^i_min at every new warm restart to control the amount of divergence. If new restarts are worse than the old ones w.r.t. the validation error, then one might also consider going back to the last best solution and performing a new restart with adjusted hyperparameters.
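If one wants to control divergence this way, a simple option is to shrink the learning-rate range multiplicatively at every restart, as in the sketch below; the decay factor is an arbitrary illustrative value, since the text only notes that η_max and η_min could be decreased.

```python
def decayed_range(eta_max0, eta_min0, restart_index, decay=0.9):
    """Shrink [eta_min, eta_max] by a constant factor at every new warm restart."""
    factor = decay ** restart_index
    return eta_max0 * factor, eta_min0 * factor

# Range used for the 0th, 1st and 2nd runs between restarts:
print([decayed_range(0.05, 0.0, k) for k in range(3)])
```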
Our results reproduce the finding by Huang et al. (2016a) that intermediate models generated by SGDR can be used to build efficient ensembles at no cost. This finding makes SGDR especially attractive for scenarios when ensemble building is considered.
# 6 CONCLUSION
In this paper, we investigated a simple warm restart mechanism for SGD to accelerate the training of DNNs. Our SGDR simulates warm restarts by scheduling the learning rate to achieve competitive results on CIFAR-10 and CIFAR-100 roughly two to four times faster. We also achieved new state-of-the-art results with SGDR, mainly by using even wider WRNs and ensembles of snapshots from SGDR's trajectory.
Future empirical studies should also consider the SVHN, ImageNet and MS COCO datasets, for which Residual Neural Networks have shown the best results so far. Our preliminary results on a dataset of EEG recordings suggest that SGDR delivers better and better results as we carry out more restarts and use more model snapshots. The results on our downsampled ImageNet dataset suggest that SGDR might also reduce the problem of learning rate selection because the annealing and restarts of SGDR scan / consider a range of learning rate values. Future work should consider warm restarts for other popular training algorithms such as AdaDelta (Zeiler, 2012) and Adam (Kingma & Ba, 2014).
Alternative network structures should also be considered; e.g., soon after our initial arXiv report (Loshchilov & Hutter, 2016), Zhang et al. (2016), Huang et al. (2016b) and Han et al. (2016) reported that WRN models can be replaced by more memory-efficient models. Thus, it should be tested whether our results for individual models and ensembles can be further improved by using their networks instead of WRNs. Deep compression methods (Han et al., 2015) can be used to reduce the time and memory costs of DNNs and their ensembles.
# 7 ACKNOWLEDGMENTS
This work was supported by the German Research Foundation (DFG), under the BrainLinksBrainTools Cluster of Excellence (grant number EXC 1086). We thank Gao Huang, Kilian Quirin Weinberger, Jost Tobias Springenberg, Mark Schmidt and three anonymous reviewers for their helpful comments and suggestions. We thank Robin Tibor Schirrmeister for providing his pipeline for the EEG experiments and helping to integrate SGDR.
# REFERENCES
Antoine Bordes, Léon Bottou, and Patrick Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. The Journal of Machine Learning Research, 10:1737-1754, 2009.
Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surface of multilayer networks. arXiv preprint arXiv:1412.0233, 2014.
Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.
Yann N Dauphin, Harm de Vries, Junyoung Chung, and Yoshua Bengio. RMSProp and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.
L. Deng, G. Hinton, and B. Kingsbury. New types of deep neural network learning for speech recognition and related applications: An overview. In Proc. of ICASSP'13, 2013.
J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proc. of ICML'14, 2014.
Reeves Fletcher and Colin M Reeves. Function minimization by conjugate gradients. The Computer Journal, 7(2):149-154, 1964.
Kenji Fukumizu and Shun-ichi Amari. Local minima and plateaus in hierarchical structures of multilayer perceptrons. Neural Networks, 13(3):317-327, 2000.
Dongyoon Han, Jiwhan Kim, and Junmo Kim. Deep pyramidal residual networks. arXiv preprint arXiv:1610.02915, 2016.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Nikolaus Hansen. Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2389-2396. ACM, 2009.
Nikolaus Hansen and Stefan Kern. Evaluating the CMA evolution strategy on multimodal test functions. In International Conference on Parallel Problem Solving from Nature, pp. 282-291. Springer, 2004.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger. Snapshot ensembles: Train 1, get M for free. ICLR 2017 submission, 2016a.
Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016b.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016c.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Proc. of NIPS'12, pp. 1097-1105, 2012a.
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012b.
Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503-528, 1989.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic Gradient Descent with Restarts. arXiv preprint arXiv:1608.03983, 2016.
Ilya Loshchilov, Marc Schoenauer, and Michèle Sebag. Alternative restart strategies for CMA-ES. In International Conference on Parallel Problem Solving from Nature, pp. 296-305. Springer, 2012.
Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pp. 372-376, 1983.
Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.
Brendan O'Donoghue and Emmanuel Candès. Adaptive restart for accelerated gradient schemes. arXiv preprint arXiv:1204.3982, 2012.
Hadi Pouransari and Saman Ghili. Tiny ImageNet visual recognition challenge. CS231 course at Stanford, 2015.
Michael James David Powell. Restart procedures for the conjugate gradient method. Mathematical Programming, 12(1):241-254, 1977.
Mike Preuss. Niching the CMA-ES via nearest-better clustering. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, pp. 1711-1718. ACM, 2010.
Mike Preuss. Niching methods and multimodal optimization performance. In Multimodal Optimization by Means of Evolutionary Algorithms, pp. 115-137. Springer, 2015.
Raymond Ros. Benchmarking the BFGS algorithm on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, pp. 2409-2414. ACM, 2009.
Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG. arXiv preprint arXiv:1703.05051, 2017.
Leslie N Smith. No more pesky learning rate guessing games. arXiv preprint arXiv:1506.01186, 2015.
Leslie N Smith. Cyclical learning rates for training neural networks. arXiv preprint arXiv:1506.01186v3, 2016.
Tianbao Yang and Qihang Lin. Stochastic subgradient methods with linear convergence for polyhedral convex optimization. arXiv preprint arXiv:1510.01444, 2015.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Matthew D Zeiler. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
K. Zhang, M. Sun, T. X. Han, X. Yuan, L. Guo, and T. Liu. Residual Networks of Residual Networks: Multilevel Residual Networks. arXiv e-prints, August 2016.
# 8 SUPPLEMENTARY MATERIAL
Figure 6: The median results of 5 runs for the best learning rate settings considered for WRN-28-1 on CIFAR-10 (test error (%) vs. epochs for the default schedule and SGDR).
50K VS 100K EXAMPLES PER EPOCH
Our data augmentation procedure code is inherited from the Lasagne Recipe code for ResNets, where flipped images are added to the training set. This doubles the number of training examples per epoch and thus might impact the results because hyperparameter values defined as a function of the epoch index have a different meaning. While our experimental results given in Table 1 reproduced the results obtained by Zagoruyko & Komodakis (2016), here we test whether SGDR still makes sense for WRN-28-1 (i.e., a ResNet with 28 layers) where one epoch corresponds to 50k training examples. We investigate different learning rate values for the default learning rate schedule (4 values out of [0.01, 0.025, 0.05, 0.1]) and SGDR (3 values out of [0.025, 0.05, 0.1]). In line with the results given in the main paper, Figure 6 suggests that SGDR is competitive in terms of anytime performance.
[Figure 7 panel titles: WRN-28-10 on CIFAR-10 and WRN-28-10 on CIFAR-100; y-axis: loss.] | 1608.03983#51 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 52 | [Figure 7 plot residue: WRN-28-10 on CIFAR-10 (left column) and CIFAR-100 (right column); rows show training cross-entropy + regularization loss, test cross-entropy loss, and test error (%) over epochs; legend compares the default schedule (lr = 0.1 and lr = 0.05) with SGDR restart schedules (various T_0 and T_mult settings).] | 1608.03983#52 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |
1608.03983 | 53 | Figure 7: Training cross-entropy + regularization loss (top row), test loss (middle row) and test error (bottom row) on CIFAR-10 (left column) and CIFAR-100 (right column).
WRN-28-10 on downsampled 32x32 ImageNet
[Plot residue: top-5 test error (%) over epochs 5-40 on downsampled 32x32 ImageNet; legend: Default with lr = 0.050, 0.015, 0.005 and SGDR with lr = 0.050, 0.015, 0.005.] | 1608.03983#53 | SGDR: Stochastic Gradient Descent with Warm Restarts | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR | http://arxiv.org/pdf/1608.03983 | Ilya Loshchilov, Frank Hutter | cs.LG, cs.NE, math.OC | ICLR 2017 conference paper | null | cs.LG | 20160813 | 20170503 | [
{
"id": "1703.05051"
},
{
"id": "1610.02915"
},
{
"id": "1510.01444"
},
{
"id": "1605.07146"
},
{
"id": "1603.05027"
},
{
"id": "1506.01186"
},
{
"id": "1502.04390"
},
{
"id": "1608.03983"
},
{
"id": "1512.03385"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1510.00149"
}
] |