doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1603.06147 | 44 | Barry Haddow, Matthias Huck, Alexandra Birch, Nikolay Bogoychev, and Philipp Koehn. 2015. The Edinburgh/JHU phrase-based machine translation systems for WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 126–133.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107–116.
Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese Information Processing, 21(3):8–20.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP, volume 3, page 413. | 1603.06147#44 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 45 | Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP, volume 3, page 413.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-aware neural language models. arXiv preprint arXiv:1508.06615.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Hyoung-Gyu Lee, JaeSong Lee, Jun-Seok Kim, and Chang-Ki Lee. 2015. NAVER machine translation system for WAT 2015. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 69–73.
Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W. Black, and Isabel Trancoso. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096. | 1603.06147#45 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 46 | Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015b. Character-based neural machine translation. arXiv preprint arXiv:1511.04586.
Thang Luong, Richard Socher, and Christopher D. Manning. Better word representations with recursive neural networks for morphology. In CoNLL, pages 104–113.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.
Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206.
Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3. | 1603.06147#46 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 47 | Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J. Cernocky. 2012. Subword language modeling with neural networks. Preprint.
Graham Neubig, Taro Watanabe, Shinsuke Mori, and Tatsuya Kawahara. 2013. Substring-based machine translation. Machine Translation, 27(2):139–166.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2013. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026.
Raphael Rubino, Tommi Pirinen, Miquel Espla-Gomis, N. Ljubešić, Sergio Ortiz Rojas, Vassilis Papavassiliou, Prokopis Prokopidis, and Antonio Toral. 2015. Abu-MaTran at WMT 2015 translation task: Morphological segmentation and web crawling. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 184–191. | 1603.06147#47 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 48 | Cicero D. Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1818–1826.
Holger Schwenk. 2007. Continuous space language models. Computer Speech & Language, 21(3):492–518.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
Rupesh K. Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2368–2376.
Ilya Sutskever, James Martens, and Geoffrey E. Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML'11), pages 1017–1024.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. | 1603.06147#48 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.06147 | 49 | The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, et al. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688.
David Vilar, Jan-T. Peter, and Hermann Ney. 2007. Can we translate letters? In Proceedings of the Second Workshop on Statistical Machine Translation, pages 33–39. Association for Computational Linguistics.
Philip Williams, Rico Sennrich, Maria Nadejde, Matthias Huck, and Philipp Koehn. 2015. Edinburgh's syntax-based systems at WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 199–209.
2016. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367. | 1603.06147#49 | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation | The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | http://arxiv.org/pdf/1603.06147 | Junyoung Chung, Kyunghyun Cho, Yoshua Bengio | cs.CL, cs.LG | null | null | cs.CL | 20160319 | 20160621 | [
{
"id": "1605.02688"
},
{
"id": "1512.00103"
},
{
"id": "1508.07909"
},
{
"id": "1508.06615"
},
{
"id": "1602.00367"
},
{
"id": "1508.04025"
},
{
"id": "1603.00810"
},
{
"id": "1511.04586"
},
{
"id": "1508.02096"
}
] |
1603.05027 | 0 | # Identity Mappings in Deep Residual Networks
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun
Microsoft Research
Abstract Deep residual networks [1] have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https://github.com/KaimingHe/resnet-1k-layers.
# 1 Introduction | 1603.05027#0 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05027 | 1 | # 1 Introduction
Deep residual networks (ResNets) [1] consist of many stacked "Residual Units". Each unit (Fig. 1(a)) can be expressed in a general form:
y_l = h(x_l) + F(x_l, W_l),   x_{l+1} = f(y_l),
where x_l and x_{l+1} are input and output of the l-th unit, and F is a residual function. In [1], h(x_l) = x_l is an identity mapping and f is a ReLU [2] function. ResNets that are over 100-layer deep have shown state-of-the-art accuracy for several challenging recognition tasks on ImageNet [3] and MS COCO [4] competitions. The central idea of ResNets is to learn the additive residual function F with respect to h(x_l), with a key choice of using an identity mapping h(x_l) = x_l. This is realized by attaching an identity skip connection ("shortcut"). | 1603.05027#1 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
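The general Residual Unit form quoted in the chunk above, y_l = h(x_l) + F(x_l, W_l) with x_{l+1} = f(y_l), where h is the identity and f is a ReLU, can be sketched in a few lines. The snippet below is a minimal NumPy illustration written for this dump, not the authors' code; the toy matrices W1 and W2 merely stand in for the two 3×3 convolutional layers of F.

```python
# Minimal NumPy sketch of one post-activation Residual Unit:
#   y_l = h(x_l) + F(x_l, W_l),  x_{l+1} = f(y_l)
# with h the identity mapping and f = ReLU.
import numpy as np

def residual_unit(x, W1, W2):
    relu = lambda z: np.maximum(z, 0.0)
    F = W2 @ relu(W1 @ x)   # toy residual function F(x_l, W_l)
    y = x + F               # h(x_l) = x_l: identity skip connection
    return relu(y)          # f = ReLU applied after the addition

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1 = rng.standard_normal((8, 8)) * 0.1
W2 = rng.standard_normal((8, 8)) * 0.1
print(residual_unit(x, W1, W2).shape)   # (8,)
```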
1603.05279 | 1 | Abstract. We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32× memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of number of the high precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet. | 1603.05279#1 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 2 | In this paper, we analyze deep residual networks by focusing on creating a "direct" path for propagating information, not only within a residual unit but through the entire network. Our derivations reveal that if both h(x_l) and f(y_l) are identity mappings, the signal could be directly propagated from one unit to any other units, in both forward and backward passes. Our experiments empirically show that training in general becomes easier when the architecture is closer to the above two conditions.
To understand the role of skip connections, we analyze and compare various types of h(x_l). We find that the identity mapping h(x_l) = x_l chosen in [1]
[Figure 1 graphic: diagrams of the original and proposed Residual Units, and CIFAR-10 training curves of ResNet-1001, original (error: 7.61%) vs. proposed (error: 4.92%), showing training loss and test error over iterations.] | 1603.05027#2 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 2 | # 1 Introduction
Deep neural networks (DNN) have shown significant improvements in several application domains including computer vision and speech recognition. In computer vision, a particular type of DNN, known as Convolutional Neural Networks (CNN), have demonstrated state-of-the-art results in object recognition [1,2,3,4] and detection [5,6,7].
Convolutional neural networks show reliable results on object recognition and detection that are useful in real world applications. Concurrent to the recent progress in recognition, interesting advancements have been happening in virtual reality (VR by Oculus) [8], augmented reality (AR by HoloLens) [9], and smart wearable devices. Putting these two pieces together, we argue that it is the right time to equip smart portable devices with the power of state-of-the-art recognition systems. However, CNN-based recognition systems need large amounts of memory and computational power. While they perform well on expensive, GPU-based machines, they are often unsuitable for smaller devices like cell phones and embedded electronics. | 1603.05279#2 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 3 | Figure 1. Left: (a) original Residual Unit in [1]; (b) proposed Residual Unit. The grey arrows indicate the easiest paths for the information to propagate, corresponding to the additive term "x_l" in Eqn.(4) (forward propagation) and the additive term "1" in Eqn.(5) (backward propagation). Right: training curves on CIFAR-10 of 1001-layer ResNets. Solid lines denote test error (y-axis on the right), and dashed lines denote training loss (y-axis on the left). The proposed unit makes ResNet-1001 easier to train.
achieves the fastest error reduction and lowest training loss among all variants we investigated, whereas skip connections of scaling, gating [5,6,7], and 1×1 convolutions all lead to higher training loss and error. These experiments suggest that keeping a "clean" information path (indicated by the grey arrows in Fig. 1, 2, and 4) is helpful for easing optimization. | 1603.05027#3 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 3 | For example, AlexNet[1] has 61M parameters (249MB of memory) and performs 1.5B high precision operations to classify one image. These numbers are even higher for deeper CNNs e.g.,VGG [2] (see section 4.1). These models quickly overtax the limited storage, battery power, and compute capabilities of smaller devices like cell phones.
[Fig. 1 table: network variations, operations used in convolution, memory saving and computation saving at inference, and AlexNet accuracy on ImageNet. Standard Convolution (real-value inputs and weights, +, -, ×): 1× memory, 1× computation, 56.7%. Binary Weight (real-value inputs, binary weights, +, -): ~32× memory, ~2× computation, 56.8%. Binary Input and Binary Weight, XNOR-Net (XNOR, bitcount): ~32× memory, ~58× computation, 44.2%.] | 1603.05279#3 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
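A quick back-of-the-envelope calculation (my own arithmetic, not taken from the paper) makes the memory argument in the chunk above concrete: storing AlexNet's roughly 61M parameters as 32-bit floats versus one bit per weight differs by about 32×.

```python
# Rough memory estimate for ~61M parameters, full precision vs. binary weights.
params = 61_000_000
full_precision_mb = params * 4 / 2**20   # 32-bit floats
binary_mb = params / 8 / 2**20           # 1 bit per weight
print(f"full precision: ~{full_precision_mb:.0f} MB")   # ~233 MB (the paper quotes 249MB)
print(f"binary weights: ~{binary_mb:.1f} MB (~{full_precision_mb / binary_mb:.0f}x smaller)")
```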
1603.05027 | 4 | To construct an identity mapping f(y_l) = y_l, we view the activation functions (ReLU and BN [8]) as "pre-activation" of the weight layers, in contrast to conventional wisdom of "post-activation". This point of view leads to a new residual unit design, shown in (Fig. 1(b)). Based on this unit, we present competitive results on CIFAR-10/100 with a 1001-layer ResNet, which is much easier to train and generalizes better than the original ResNet in [1]. We further report improved results on ImageNet using a 200-layer ResNet, for which the counterpart of [1] starts to overfit. These results suggest that there is much room to exploit the dimension of network depth, a key to the success of modern deep learning.
# 2 Analysis of Deep Residual Networks
The ResNets developed in [1] are modularized architectures that stack building blocks of the same connecting shape. In this paper we call these blocks "Residual Units". The original Residual Unit in [1] performs the following computation:
y_l = h(x_l) + F(x_l, W_l),   (1)
x_{l+1} = f(y_l).   (2) | 1603.05027#4 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
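The "pre-activation" ordering described in the chunk above (BN and ReLU viewed as pre-activation of the weight layers, with a clean identity skip and no activation after the addition) can be sketched as follows. This is an illustrative PyTorch module written for this summary, not the authors' released code; the channel count and layer sizes are arbitrary.

```python
# Sketch of a pre-activation Residual Unit: BN -> ReLU -> conv, twice, then add.
import torch
import torch.nn as nn

class PreActResidualUnit(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))    # BN -> ReLU -> conv
        out = self.conv2(torch.relu(self.bn2(out)))  # BN -> ReLU -> conv
        return x + out   # identity skip; no activation after the addition

x = torch.randn(2, 16, 8, 8)
print(PreActResidualUnit(16)(x).shape)   # torch.Size([2, 16, 8, 8])
```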
1603.05279 | 4 | Fig. 1: We propose two efficient variations of convolutional neural networks. Binary-Weight-Networks, when the weight filters contain binary values. XNOR-Networks, when both weight and input have binary values. These networks are very efficient in terms of memory and computation, while being very accurate in natural image classification. This offers the possibility of using accurate vision techniques in portable devices with limited resources.
In this paper, we introduce simple, efficient, and accurate approximations to CNNs by binarizing the weights and even the intermediate representations in convolutional neural networks. Our binarization method aims at finding the best approximations of the convolutions using binary operations. We demonstrate that our way of binarizing neural networks results in ImageNet classification accuracy numbers that are comparable to standard full precision networks while requiring significantly less memory and fewer floating point operations. | 1603.05279#4 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 5 | y_l = h(x_l) + F(x_l, W_l),   (1)
x_{l+1} = f(y_l).   (2)
Here x_l is the input feature to the l-th Residual Unit. W_l = {W_{l,k} | 1 ≤ k ≤ K} is a set of weights (and biases) associated with the l-th Residual Unit, and K is the number of layers in a Residual Unit (K is 2 or 3 in [1]). F denotes the residual function, e.g., a stack of two 3×3 convolutional layers in [1]. The function f is the operation after element-wise addition, and in [1] f is ReLU. The function h is set as an identity mapping: h(x_l) = x_l.¹
If f is also an identity mapping: x_{l+1} ≡ y_l, we can put Eqn.(2) into Eqn.(1) and obtain:
x_{l+1} = x_l + F(x_l, W_l).   (3)
Recursively (xl+2 = xl+1 + F (xl+1, Wl+1) = xl + F (xl, Wl) + F (xl+1, Wl+1), etc.) we will have: | 1603.05027#5 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 5 | We study two approximations: Neural networks with binary weights and XNOR-Networks. In Binary-Weight-Networks all the weight values are approximated with binary values. A convolutional neural network with binary weights is significantly smaller (~32×) than an equivalent network with single-precision weight values. In addition, when weight values are binary, convolutions can be estimated by only addition and subtraction (without multiplication), resulting in ~2× speed up. Binary-weight approximations of large CNNs can fit into the memory of even small, portable devices while maintaining the same level of accuracy (See Section 4.1 and 4.2). | 1603.05279#5 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
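The Binary-Weight-Network idea summarized above replaces each real-valued filter W with a scaled binary filter αB. A minimal NumPy sketch of that approximation is shown below; taking B = sign(W) and α as the mean absolute value of W follows the closed-form choice described in the paper, while the random tensor is only a stand-in for a real convolution filter.

```python
# Approximate a real-valued filter W by alpha * B with B in {-1, +1}.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3, 64))   # toy stand-in for a convolution filter

B = np.sign(W)                        # binary filter
alpha = np.abs(W).mean()              # single real scaling factor
W_approx = alpha * B

err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"relative approximation error: {err:.3f}")
print(f"storage: {W.size * 32} bits full precision vs ~{W.size} bits binary (plus one float)")
```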
1603.05027 | 6 | x_L = x_l + \sum_{i=l}^{L-1} F(x_i, W_i),   (4)
for any deeper unit L and any shallower unit l. Eqn.(4) exhibits some nice properties. (i) The feature x_L of any deeper unit L can be represented as the feature x_l of any shallower unit l plus a residual function in a form of \sum_{i=l}^{L-1} F, indicating that the model is in a residual fashion between any units L and l. (ii) The feature x_L = x_0 + \sum_{i=0}^{L-1} F(x_i, W_i), of any deep unit L, is the summation of the outputs of all preceding residual functions (plus x_0). This is in contrast to a "plain network" where a feature x_L is a series of matrix-vector products, say, \prod_{i=0}^{L-1} W_i x_0 (ignoring BN and ReLU).
Eqn.(4) also leads to nice backward propagation properties. Denoting the
loss function as E, from the chain rule of backpropagation [9] we have:
\frac{\partial E}{\partial x_l} = \frac{\partial E}{\partial x_L}\,\frac{\partial x_L}{\partial x_l} = \frac{\partial E}{\partial x_L}\left(1 + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} F(x_i, W_i)\right)   (5) | 1603.05027#6 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
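Eqn.(4) in the chunk above says that with identity mappings for both h and f, the deep feature is the shallow feature plus a sum of residuals. The short NumPy check below (illustrative only, with an arbitrary tanh residual branch) verifies this additivity numerically for a stack of five units.

```python
# Numerical check of Eqn.(4): x_L = x_l + sum_i F(x_i, W_i) when h and f are identities.
import numpy as np

rng = np.random.default_rng(1)
Ws = [rng.standard_normal((8, 8)) * 0.1 for _ in range(5)]   # toy weights per unit
F = lambda x, W: np.tanh(W @ x)                              # toy residual function

x = rng.standard_normal(8)
x0 = x.copy()
residual_sum = np.zeros_like(x)
for W in Ws:              # x_{i+1} = x_i + F(x_i, W_i)
    r = F(x, W)
    residual_sum += r
    x = x + r

print("Eqn.(4) holds:", np.allclose(x, x0 + residual_sum))
```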
1603.05279 | 6 | To take this idea further, we introduce XNOR-Networks where both the weights and the inputs to the convolutional and fully connected layers are approximated with binary values¹. Binary weights and binary inputs allow an efficient way of implementing convolutional operations. If all of the operands of the convolutions are binary, then the convolutions can be estimated by XNOR and bitcounting operations [11]. XNOR-Nets result in accurate approximation of CNNs while offering ~58× speed up in CPUs (in terms of number of the high precision operations). This means that XNOR-Nets can enable real-time inference in devices with small memory and no GPUs (Inference in XNOR-Nets can be done very efficiently on CPUs).
To the best of our knowledge this paper is the first attempt to present an evaluation of binary neural networks on large-scale datasets like ImageNet. Our experimental
1 fully connected layers can be implemented by convolution, therefore, in the rest of the paper, we refer to them also as convolutional layers [10].
XNOR-Net: ImageNet Classiï¬cation Using Binary Convolutional Neural Networks | 1603.05279#6 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
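The claim above that fully binary convolutions reduce to XNOR and bit-counting rests on a simple identity: for vectors with entries in {-1, +1}, the dot product equals 2·popcount(XNOR(bits)) - n. The snippet below is an illustration written for this summary, not the paper's implementation; it checks that identity with NumPy.

```python
# Binary dot product via XNOR and bit-counting: x . w = 2 * popcount(XNOR) - n.
import numpy as np

rng = np.random.default_rng(2)
n = 64
x = rng.choice([-1, 1], size=n)
w = rng.choice([-1, 1], size=n)

xb = (x > 0)                 # encode +1 as True, -1 as False
wb = (w > 0)
xnor = ~(xb ^ wb)            # True where the signs agree
dot_via_bits = 2 * int(np.count_nonzero(xnor)) - n

assert dot_via_bits == int(x @ w)
print("dot product:", int(x @ w), "== XNOR/bitcount result:", dot_via_bits)
```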
1603.05027 | 7 | \frac{\partial E}{\partial x_l} = \frac{\partial E}{\partial x_L}\,\frac{\partial x_L}{\partial x_l} = \frac{\partial E}{\partial x_L}\left(1 + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} F(x_i, W_i)\right)   (5)
Eqn.(5) indicates that the gradient \frac{\partial E}{\partial x_l} can be decomposed into two additive terms: a term of \frac{\partial E}{\partial x_L} that propagates information directly without concerning any weight layers, and another term of \frac{\partial E}{\partial x_L}\left(\frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} F\right) that propagates through the weight layers. The additive term of \frac{\partial E}{\partial x_L} ensures that information is directly propagated back to any shallower unit l. Eqn.(5) also suggests that it
1 It is noteworthy that there are Residual Units for increasing dimensions and reducing feature map sizes [1] in which h is not identity. In this case the following derivations do not hold strictly. But as there are only a very few such units (two on CIFAR and three on ImageNet, depending on image sizes [1]), we expect that they do not have the exponential impact as we present in Sec. 3. One may also think of our derivations as applied to all Residual Units within the same feature map size.
4 | 1603.05027#7 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 7 | XNOR-Net: ImageNet Classiï¬cation Using Binary Convolutional Neural Networks
results show that our proposed method for binarizing convolutional neural networks outperforms the state-of-the-art network binarization method of [11] by a large margin (16.3%) on top-1 image classification in the ImageNet challenge ILSVRC2012. Our contribution is two-fold: First, we introduce a new way of binarizing the weight values in convolutional neural networks and show the advantage of our solution compared to state-of-the-art solutions. Second, we introduce XNOR-Nets, a deep neural network model with binary weights and binary inputs and show that XNOR-Nets can obtain similar classification accuracies compared to standard networks while being significantly more efficient. Our code is available at: http://allenai.org/plato/xnornet
# 2 Related Work
Deep neural networks often suffer from over-parametrization and large amounts of redundancy in their models. This typically results in inefficient computation and memory usage [12]. Several methods have been proposed to address efficient training and inference in deep neural networks. | 1603.05279#7 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 8 |
is unlikely for the gradient \frac{\partial E}{\partial x_l} to be canceled out for a mini-batch, because in general the term \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} F cannot be always -1 for all samples in a mini-batch. This implies that the gradient of a layer does not vanish even when the weights are arbitrarily small.
# Discussions
Eqn.(4) and Eqn.(5) suggest that the signal can be directly propagated from any unit to another, both forward and backward. The foundation of Eqn.(4) is two identity mappings: (i) the identity skip connection h(xl) = xl, and (ii) the condition that f is an identity mapping.
These directly propagated information flows are represented by the grey arrows in Fig. 1, 2, and 4. And the above two conditions are true when these grey arrows cover no operations (except addition) and thus are "clean". In the following two sections we separately investigate the impacts of the two conditions.
# 3 On the Importance of Identity Skip Connections
Let's consider a simple modification, h(x_l) = λ_l x_l, to break the identity shortcut:
xl+1 = λlxl + F(xl, Wl), (6) | 1603.05027#8 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 8 | Shallow networks: Estimating a deep neural network with a shallower model reduces the size of a network. Early theoretical work by Cybenko shows that a network with a large enough single hidden layer of sigmoid units can approximate any decision boundary [13]. In several areas (e.g., vision and speech), however, shallow networks cannot compete with deep models [14]. [15] trains a shallow network on SIFT features to classify the ImageNet dataset. They show it is difficult to train shallow networks with large number of parameters. [16] provides empirical evidence on small datasets (e.g., CIFAR-10) that shallow nets are capable of learning the same functions as deep nets. In order to get the similar accuracy, the number of parameters in the shallow network must be close to the number of parameters in the deep network. They do this by first training a state-of-the-art deep model, and then training a shallow model to mimic the deep model. These methods are different from our approach because we use the standard deep architectures not the shallow estimations. | 1603.05279#8 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 9 | xl+1 = λlxl + F(xl, Wl), (6)
where λ_l is a modulating scalar (for simplicity we still assume f is identity). Recursively applying this formulation we obtain an equation similar to Eqn.(4): x_L = \left(\prod_{i=l}^{L-1}\lambda_i\right) x_l + \sum_{i=l}^{L-1}\left(\prod_{j=i+1}^{L-1}\lambda_j\right) F(x_i, W_i), or simply:
x_L = \left(\prod_{i=l}^{L-1}\lambda_i\right) x_l + \sum_{i=l}^{L-1}\hat{F}(x_i, W_i),   (7)
where the notation \hat{F} absorbs the scalars into the residual functions. Similar to Eqn.(5), we have backpropagation of the following form:
\frac{\partial E}{\partial x_l} = \frac{\partial E}{\partial x_L}\left(\left(\prod_{i=l}^{L-1}\lambda_i\right) + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1}\hat{F}(x_i, W_i)\right)   (8)
Unlike Eqn.(5), in Eqn.(8) the first additive term is modulated by a factor \prod_{i=l}^{L-1}\lambda_i. For an extremely deep network (L is large), if λ_i > 1 for all i, this factor can be exponentially large; if λ_i < 1 for all i, this factor can be exponentially small and vanish, which blocks the backpropagated signal from the shortcut and forces it to flow through the weight layers. This results in optimization difficulties as we show by experiments. | 1603.05027#9 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
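The analysis above (Eqns. (6)-(8)) argues that replacing the identity shortcut with a constant scaling λ multiplies the directly propagated signal by \prod_i λ_i, which explodes or vanishes for deep stacks. A tiny NumPy calculation (my own illustration) makes the size of that factor explicit for L = 100 units.

```python
# Size of the modulating factor prod_i lambda_i from Eqn.(8) over a deep stack.
import numpy as np

L = 100
for lam in (0.9, 1.0, 1.1):
    factor = np.prod(np.full(L, lam))
    print(f"lambda = {lam}: product over {L} units = {factor:.3e}")
# lambda = 0.9 -> ~2.7e-05 (vanishes); lambda = 1.0 -> 1; lambda = 1.1 -> ~1.4e+04 (explodes)
```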
1603.05279 | 9 | Compressing pre-trained deep networks: Pruning redundant, non-informative weights in a previously trained network reduces the size of the network at inference time. Weight decay [17] was an early method for pruning a network. Optimal Brain Damage [18] and Optimal Brain Surgeon [19] use the Hessian of the loss function to prune a network by reducing the number of connections. Recently [20] reduced the number of parameters by an order of magnitude in several state-of-the-art neural net- works by pruning. [21] proposed to reduce the number of activations for compression and acceleration. Deep compression [22] reduces the storage and energy required to run inference on large networks so they can be deployed on mobile devices. They remove the redundant connections and quantize weights so that multiple connections share the same weight, and then they use Huffman coding to compress the weights. HashedNets [23] uses a hash function to reduce model size by randomly grouping the weights, such that connections in a hash bucket use a single parameter value. Matrix factorization has been used by [24,25]. We are different from these approaches because we do not use a pretrained network. We train binary networks from scratch.
4 Rastegari et al. | 1603.05279#9 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 10 | In the above analysis, the original identity skip connection in Eqn.(3) is replaced with a simple scaling h(x_l) = λ_l x_l. If the skip connection h(x_l) represents more complicated transforms (such as gating and 1x1 convolutions), in Eqn.(8) the first term becomes the product of the derivatives h'_i taken over units i = l to L-1, where h' is the derivative of h. This product may also impede information propagation and hamper the training procedure as witnessed in the following experiments.
[Figure 2 schematic of the shortcut variants compared in Table 1: (a) original, (b) constant scaling, (c) exclusive gating, (d) shortcut-only gating, (e) conv shortcut, (f) dropout shortcut; each unit uses two 3x3 conv layers on the residual branch.] | 1603.05027#10 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 10 |
Designing compact layers: Designing compact blocks at each layer of a deep network can help to save memory and computational costs. Replacing the fully connected layer with global average pooling was examined in the Network in Network architecture [26], GoogLenet [3] and Residual-Net [4], which achieved state-of-the-art results on several benchmarks. The bottleneck structure in Residual-Net [4] has been proposed to reduce the number of parameters and improve speed. Decomposing 3x3 convolutions with two 1x1 convolutions is used in [27] and resulted in state-of-the-art performance on object recognition. Replacing 3x3 convolutions with 1x1 convolutions is used in [28] to create a very compact neural network that can achieve ~50x reduction in the number of parameters while obtaining high accuracy. Our method is different from this line of work because we use the full network (not the compact version) but with binary parameters. | 1603.05279#10 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 11 | Figure 2. Various types of shortcut connections used in Table 1. The grey arrows indicate the easiest paths for the information to propagate. The shortcut connections in (b-f) are impeded by different components. For simplifying illustrations we do not display the BN layers, which are adopted right after the weight layers for all units here.
# 3.1 Experiments on Skip Connections
We experiment with the 110-layer ResNet as presented in [1] on CIFAR-10 [10]. This extremely deep ResNet-110 has 54 two-layer Residual Units (consisting of 3x3 convolutional layers) and is challenging for optimization. Our implementation details (see appendix) are the same as [1]. Throughout this paper we report the median accuracy of 5 runs for each architecture on CIFAR, reducing the impacts of random variations.
Though our above analysis is driven by identity f , the experiments in this section are all based on f = ReLU as in [1]; we address identity f in the next sec- tion. Our baseline ResNet-110 has 6.61% error on the test set. The comparisons of other variants (Fig. 2 and Table 1) are summarized as follows: | 1603.05027#11 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 11 | ing high performance in deep networks. [29] proposed to quantize the weights of fully connected layers in a deep network by vector quantization techniques. They showed just thresholding the weight values at zero only decreases the top-1 accuracy on ILSVRC2012 by less than %10. [30] proposed a provably polynomial time algorithm for training a sparse networks with +1/0/-1 weights. A ï¬xed-point implementation of 8-bit integer was compared with 32-bit ï¬oating point activations in [31]. Another ï¬xed-point net- work with ternary weights and 3-bits activations was presented by [32]. Quantizing a network with L2 error minimization achieved better accuracy on MNIST and CIFAR-10 datasets in [33]. [34] proposed a back-propagation process by quantizing the represen- tations at each layer of the network. To convert some of the remaining multiplications into binary shifts the neurons get restricted values of power-of-two integers. In [34] they carry the full precision weights during the test phase, and only quantize the neu- rons during the back-propagation process, and not during the forward-propagation. Our work is similar to these methods since we are quantizing the parameters in the network. But our quantization is the extreme scenario +1,-1. | 1603.05279#11 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 12 | Constant scaling. We set λ = 0.5 for all shortcuts (Fig. 2(b)). We further study two cases of scaling F: (i) F is not scaled; or (ii) F is scaled by a constant scalar of 1 - λ = 0.5, which is similar to the highway gating [6,7] but with frozen gates. The former case does not converge well; the latter is able to converge, but the test error (Table 1, 12.35%) is substantially higher than the original ResNet-110. Fig 3(a) shows that the training error is higher than that of the original ResNet-110, suggesting that the optimization has difficulties when the shortcut signal is scaled down.
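To make the effect of constant scaling concrete, a quick numerical check (not from the paper, purely illustrative) shows how a per-unit shortcut factor of 0.5 attenuates the directly propagated signal across the 54 Residual Units of ResNet-110:

```python
# Illustrative arithmetic only: repeated scaling of the shortcut path.
num_units = 54        # Residual Units in ResNet-110
lam = 0.5             # constant scaling applied to every shortcut

shortcut_gain = lam ** num_units
print(f"fraction of the signal surviving the shortcut path: {shortcut_gain:.2e}")
# ~5.55e-17, so the directly propagated term in Eqn.(5)/(8) essentially vanishes
# and information must pass through every residual branch instead.
```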
Table 1. Classification error on the CIFAR-10 test set using ResNet-110 [1], with different types of shortcut connections applied to all Residual Units. We report "fail" when the test error is higher than 20%. | 1603.05027#12 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 12 | Network binarization: These works are the most related to our approach. Several methods attempt to binarize the weights and the activations in neural networks.The per- formance of highly quantized networks (e.g.,binarized) were believed to be very poor due to the destructive property of binary quantization [35]. Expectation BackPropaga- tion (EBP) in [36] showed high performance can be achieved by a network with binary weights and binary activations. This is done by a variational Bayesian approach, that infers networks with binary weights and neurons. A fully binary network at run time presented in [37] using a similar approach to EBP, showing signiï¬cant improvement in energy efï¬ciency. In EBP the binarized parameters were only used during inference. Bi- naryConnect [38] extended the probablistic idea behind EBP. Similar to our approach, BinaryConnect uses the real-valued version of the weights as a key reference for the binarization process. The real-valued weight updated using the back propagated error by simply ignoring the binarization in the update. BinaryConnect achieved state-of-the- art results on small datasets (e.g.,CIFAR-10, | 1603.05279#12 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 13 | case                  Fig.       on shortcut   on F   error (%)  remark
original [1]          Fig. 2(a)  1             1      6.61
constant scaling      Fig. 2(b)  0             1      fail       This is a plain net
                                 0.5           1      fail
                                 0.5           0.5    12.35      frozen gating
exclusive gating      Fig. 2(c)  1-g(x)        g(x)   fail       init bg = 0 to -5
                                 1-g(x)        g(x)   8.70       init bg = -6
                                 1-g(x)        g(x)   9.81       init bg = -7
shortcut-only gating  Fig. 2(d)  1-g(x)        1      12.86      init bg = 0
                                 1-g(x)        1      6.91       init bg = -6
1x1 conv shortcut     Fig. 2(e)  1x1 conv      1      12.22
dropout shortcut      Fig. 2(f)  dropout 0.5   1      fail | 1603.05027#13 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 13 | ignoring the binarization in the update. BinaryConnect achieved state-of-the- art results on small datasets (e.g.,CIFAR-10, SVHN). Our experiments shows that this method is not very successful on large-scale datsets (e.g.,ImageNet). BinaryNet[11] propose an extention of BinaryConnect, where both weights and activations are bi- narized. Our method is different from them in the binarization method and the netXNOR-Net: ImageNet Classiï¬cation Using Binary Convolutional Neural Networks | 1603.05279#13 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 14 | Exclusive gating. Following the Highway Networks [6,7] that adopt a gating mechanism [5], we consider a gating function g(x) = σ(Wg x + bg) where a transform is represented by weights Wg and biases bg followed by the sigmoid function σ(x) = 1/(1 + e^(-x)). In a convolutional network g(x) is realized by a 1x1 convolutional layer. The gating function modulates the signal by element-wise multiplication. | 1603.05027#14 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 14 | work structure. We also compare our method with BinaryNet on ImageNet, and our method outperforms BinaryNet by a large margin.[39] argued that the noise introduced by weight binarization provides a form of regularization, which could help to improve test accuracy. This method binarizes weights while maintaining full precision activa- tion. [40] proposed fully binary training and testing in an array of committee machines with randomized input. [41] retraine a previously trained neural network with binary weights and binary inputs.
# 3 Binary Convolutional Neural Network | 1603.05279#14 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 15 | We investigate the "exclusive" gates as used in [6,7] - the F path is scaled by g(x) and the shortcut path is scaled by 1-g(x). See Fig 2(c). We find that the initialization of the biases bg is critical for training gated models, and following the guidelines2 in [6,7], we conduct hyper-parameter search on the initial value of bg in the range of 0 to -10 with a decrement step of -1 on the training set by cross-validation. The best value (-6 here) is then used for training on the training set, leading to a test result of 8.70% (Table 1), which still lags far behind the ResNet-110 baseline. Fig 3(b) shows the training curves. Table 1 also reports the results of using other initialized values, noting that the exclusive gating network does not converge to a good solution when bg is not appropriately initialized.
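As a sketch of how such an exclusive gate could be implemented, the following PyTorch module is an illustrative reconstruction from the description above, not the authors' code; the module name and layer sizes are assumptions, and the gate bias initialization bg is exposed because it is critical.

```python
import torch
import torch.nn as nn

class ExclusiveGatedResidualUnit(nn.Module):
    """Sketch of Fig. 2(c): output = ReLU((1 - g(x)) * x + g(x) * F(x))."""
    def __init__(self, channels, gate_bias_init=-6.0):
        super().__init__()
        self.residual = nn.Sequential(   # F: two 3x3 conv layers with BN
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)  # g(x) via a 1x1 conv
        nn.init.constant_(self.gate.bias, gate_bias_init)         # bg init (e.g., -6)

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))          # g(x) = sigmoid(Wg x + bg)
        return torch.relu((1.0 - g) * x + g * self.residual(x))
```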
The impact of the exclusive gating mechanism is two-fold. When 1-g(x) approaches 1, the gated shortcut connections are closer to identity which helps information propagation; but in this case g(x) approaches 0 and suppresses the function F. To isolate the effects of the gating functions on the shortcut path alone, we investigate a non-exclusive gating mechanism in the next. | 1603.05027#15 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 15 | # 3 Binary Convolutional Neural Network
We represent an L-layer CNN architecture with a triplet (I, W, *). I is a set of tensors, where each element I = I_l (l = 1, ..., L) is the input tensor for the l-th layer of the CNN (green cubes in Figure 1). W is a set of tensors, where each element in this set W = W_lk (k = 1, ..., K^l) is the k-th weight filter in the l-th layer of the CNN. K^l is the number of weight filters in the l-th layer of the CNN. * represents a convolutional operation with I and W as its operands. I ∈ R^(c x w_in x h_in), where (c, w_in, h_in) represents channels, width and height respectively. W ∈ R^(c x w x h), where w <= w_in, h <= h_in. We propose two variations of binary CNN: Binary-weights, where the elements of W are binary tensors, and XNOR-Networks, where elements of both I and W are binary tensors.
# 3.1 Binary-Weight-Networks | 1603.05279#15 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 16 | Shortcut-only gating. In this case the function F is not scaled; only the shortcut path is gated by 1-g(x). See Fig 2(d). The initialized value of bg is still essential in this case. When the initialized bg is 0 (so initially the expectation of 1-g(x) is 0.5), the network converges to a poor result of 12.86% (Table 1). This is also caused by higher training error (Fig 3(c)).
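A back-of-the-envelope check (not from the paper) of what the initialization of bg implies for the shortcut factor 1-g(x) at the start of training, when Wg x is roughly zero:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for bg in (0.0, -6.0):
    g = sigmoid(bg)                      # expected gate value at initialization
    print(f"bg = {bg:+.0f}: g ~ {g:.4f}, shortcut factor 1 - g ~ {1.0 - g:.4f}")
# bg =  0: the shortcut is scaled by ~0.5, far from an identity mapping
# bg = -6: the shortcut is scaled by ~0.9975, i.e., nearly an identity mapping
```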
# 2 See also: people.idsia.ch/~rupesh/very_deep_learning/ by [6,7].
[Figure 3 plots (training loss and test error curves on CIFAR-10): (a) 110 original vs. 110 with constant scaling (0.5, 0.5); (c) 110 original vs. 110 with shortcut-only gating (init b=0); (d) 110 original vs. 110 with 1x1 conv shortcut; x-axis: iterations.] | 1603.05027#16 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 16 | # 3.1 Binary-Weight-Networks
In order to constrain a convolutional neural network (I, W, *) to have binary weights, we estimate the real-value weight filter W ∈ W using a binary filter B ∈ {+1, -1}^(c x w x h) and a scaling factor α ∈ R+ such that W ≈ αB. A convolutional operation can be approximated by:
I * W ≈ (I ⊕ B) α (1)
where ⊕ indicates a convolution without any multiplication. Since the weight values are binary, we can implement the convolution with additions and subtractions. The binary weight filters reduce memory usage by a factor of ~32x compared to single-precision filters. We represent a CNN with binary weights by (I, B, A, ⊕), where B is a set of binary tensors and A is a set of positive real scalars, such that B = B_lk is a binary filter and α = A_lk is a scaling factor and W_lk ≈ A_lk B_lk.
Estimating binary weights: Without loss of generality we assume W, B are vectors in R^n, where n = c x w x h. To find an optimal estimation for W ≈ αB, we solve the following optimization:
α*, B* = argmin_{α,B} J(B, α) (2) | 1603.05279#16 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 17 |
Figure 3. Training curves on CIFAR-10 of various shortcuts. Solid lines denote test error (y-axis on the right), and dashed lines denote training loss (y-axis on the left).
When the initialized bg is very negatively biased (e.g., -6), the value of 1-g(x) is closer to 1 and the shortcut connection is nearly an identity mapping. Therefore, the result (6.91%, Table 1) is much closer to the ResNet-110 baseline. 1x1 convolutional shortcut. Next we experiment with 1x1 convolutional shortcut connections that replace the identity. This option has been investigated in [1] (known as option C) on a 34-layer ResNet (16 Residual Units) and shows good results, suggesting that 1x1 shortcut connections could be useful. But we find that this is not the case when there are many Residual Units. The 110-layer ResNet has a poorer result (12.22%, Table 1) when using 1x1 convolutional shortcuts. Again, the training error becomes higher (Fig 3(d)). When stacking so many Residual Units (54 for ResNet-110), even the shortest path may still impede signal propagation. We witnessed similar phenomena on ImageNet with ResNet-101 when using 1x1 convolutional shortcuts. | 1603.05027#17 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 17 | α*, B* = argmin_{α,B} J(B, α) (2)
2 In this paper we assume convolutional filters do not have bias terms
by expanding equation 2, we have
J(B, α) = α^2 B^T B - 2α W^T B + W^T W (3)
since B ∈ {+1, -1}^n, B^T B = n is a constant. W^T W is also a constant because W is a known variable. Let us define c = W^T W. Now, we can rewrite equation 3 as follows: J(B, α) = α^2 n - 2α W^T B + c. The optimal solution for B can be achieved by maximizing the following constrained optimization (note that α is a positive value in equation 2, therefore it can be ignored in the maximization):
B* = argmax_B {W^T B} s.t. B ∈ {+1, -1}^n (4)
This optimization can be solved by assigning B_i = +1 if W_i >= 0 and B_i = -1 if W_i < 0, therefore the optimal solution is B* = sign(W). In order to find the optimal value for the scaling factor α*, we take the derivative of J with respect to α and set it to zero:
α* = W^T B* / n (5) | 1603.05279#17 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 18 | Dropout shortcut. Last we experiment with dropout [11] (at a ratio of 0.5) which we adopt on the output of the identity shortcut (Fig. 2(f)). The network fails to converge to a good solution. Dropout statistically imposes a scale of λ with an expectation of 0.5 on the shortcut, and similar to constant scaling by 0.5, it impedes signal propagation.
Table 2. Classification error (%) on the CIFAR-10 test set using different activation functions.
case                        Fig.       ResNet-110  ResNet-164
original Residual Unit [1]  Fig. 4(a)  6.61        5.93
BN after addition           Fig. 4(b)  8.17        6.50
ReLU before addition        Fig. 4(c)  7.84        6.14
ReLU-only pre-activation    Fig. 4(d)  6.71        5.91
full pre-activation         Fig. 4(e)  6.37        5.46 | 1603.05027#18 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 18 | α* = W^T B* / n (5)
By replacing B* with sign(W),
α* = (W^T sign(W)) / n = (Σ|W_i|) / n = (1/n) ||W||_l1 (6)
therefore, the optimal estimation of a binary weight filter can be simply achieved by taking the sign of weight values. The optimal scaling factor is the average of absolute weight values.
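A minimal sketch of this closed-form estimator (sign of the weights plus the mean absolute value per filter), written here as illustrative PyTorch code rather than the authors' implementation:

```python
import torch

def binarize_filters(w):
    """Approximate real-valued filters w (shape K x c x h x w) by alpha * B.

    alpha: per-filter scaling factors, alpha* = ||W||_l1 / n   (Eqn. 6)
    B:     per-filter binary tensors,  B* = sign(W)            (Eqn. 4)
    """
    k = w.size(0)
    alpha = w.abs().view(k, -1).mean(dim=1)      # average of absolute weight values
    b = torch.sign(w)
    b[b == 0] = 1.0                              # resolve exact zeros to +1
    return alpha, b

# usage: w_tilde is the binary approximation used in place of w
w = torch.randn(64, 3, 3, 3)
alpha, b = binarize_filters(w)
w_tilde = alpha.view(-1, 1, 1, 1) * b
```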
Training Binary-Weights-Networks: Each iteration of training a CNN involves three steps: forward pass, backward pass and parameter update. To train a CNN with binary weights (in convolutional layers), we only binarize the weights during the forward pass and backward propagation. To compute the gradient for the sign function sign(r), we follow the same approach as [11], where ∂sign/∂r = r 1_{|r|<=1}. The gradient in backward after the scaled sign function is ∂C/∂W_i = ∂C/∂W~_i (1/n + ∂sign/∂W_i α). For updating the parameters, we use the high precision (real-value) weights, because in gradient descent the parameter changes are tiny and binarization after updating the parameters would ignore these changes, so the training objective could not be improved. [11,38] also employed this strategy to train a binary network. | 1603.05279#18 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 19 | [Figure 4 schematic of the activation orderings compared in Table 2: (a) original, (b) BN after addition, (c) ReLU before addition, (d) ReLU-only pre-activation, (e) full pre-activation.]
Figure 4. Various usages of activation in Table 2. All these units consist of the same components; only the orders are different.
# 3.2 Discussions
As indicated by the grey arrows in Fig. 2, the shortcut connections are the most direct paths for the information to propagate. Multiplicative manipulations (scaling, gating, 1x1 convolutions, and dropout) on the shortcuts can hamper information propagation and lead to optimization problems. | 1603.05027#19 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 19 | Algorithm 1 demonstrates our procedure for training a CNN with binary weights. First, we binarize the weight filters at each layer by computing B and A. Then we call forward propagation using binary weights and their corresponding scaling factors, where all the convolutional operations are carried out by equation 1. Then, we call backward propagation, where the gradients are computed with respect to the estimated weight filters W~. Lastly, the parameters and the learning rate get updated by an update rule, e.g., SGD update with momentum or ADAM [42].
Once the training is finished, there is no need to keep the real-value weights, because at inference we only perform forward propagation with the binarized weights.
Algorithm 1 Training an L-layer CNN with binary weights:
Input: A minibatch of inputs and targets (I, Y), cost function C(Y, Ŷ), current weight W^t and current learning rate η^t.
Output: updated weight W^(t+1) and updated learning rate η^(t+1).
1: Binarizing weight filters:
2: for l = 1 to L do
3:   for the k-th filter in the l-th layer do
4:     A_lk = (1/n) ||W^t_lk||_l1
5:     B_lk = sign(W^t_lk)
6:     W~_lk = A_lk B_lk
7: Ŷ = BinaryForward(I, B, A) // standard forward propagation except that convolutions are computed | 1603.05279#19 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 20 | It is noteworthy that the gating and 1x1 convolutional shortcuts introduce more parameters, and should have stronger representational abilities than identity shortcuts. In fact, the shortcut-only gating and 1x1 convolution cover the solution space of identity shortcuts (i.e., they could be optimized as identity shortcuts). However, their training error is higher than that of identity shortcuts, indicating that the degradation of these models is caused by optimization issues, instead of representational abilities.
# 4 On the Usage of Activation Functions
Experiments in the above section support the analysis in Eqn.(5) and Eqn.(8), both being derived under the assumption that the after-addition activation f
is the identity mapping. But in the above experiments f is ReLU as designed in [1], so Eqn.(5) and (8) are approximate in the above experiments. Next we investigate the impact of f . | 1603.05027#20 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 20 | 3:   for the k-th filter in the l-th layer do
4:     A_lk = (1/n) ||W^t_lk||_l1
5:     B_lk = sign(W^t_lk)
6:     W~_lk = A_lk B_lk
7: Ŷ = BinaryForward(I, B, A) // standard forward propagation except that convolutions are computed using equation 1 or 11
8: ∂C/∂W~ = BinaryBackward(∂C/∂Ŷ, W~) // standard backward propagation except that gradients are computed using W~ instead of W^t
9: W^(t+1) = UpdateParameters(W^t, ∂C/∂W~, η^t) // Any update rules (e.g., SGD or ADAM)
10: η^(t+1) = UpdateLearningrate(η^t, t) // Any learning rate scheduling function
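The same procedure can be sketched as one PyTorch training step (an illustrative reconstruction, not the authors' code; it binarizes only Conv2d weights and omits the exact (1/n + ∂sign/∂W α) gradient correction for brevity):

```python
import torch

def binary_weight_step(model, loss_fn, x, y, optimizer):
    """One training iteration in the spirit of Algorithm 1 (illustrative sketch)."""
    real = {}
    for name, m in model.named_modules():
        if isinstance(m, torch.nn.Conv2d):
            real[name] = m.weight.data.clone()                  # keep full-precision W^t
            w = m.weight.data
            alpha = w.abs().view(w.size(0), -1).mean(dim=1)
            m.weight.data = alpha.view(-1, 1, 1, 1) * torch.sign(w)   # W~ = A * B

    loss = loss_fn(model(x), y)          # BinaryForward: convolutions see W~
    optimizer.zero_grad()
    loss.backward()                      # BinaryBackward: gradients w.r.t. W~

    for name, m in model.named_modules():
        if isinstance(m, torch.nn.Conv2d):
            m.weight.data = real[name]   # restore W^t before the update
    optimizer.step()                     # UpdateParameters on the real-valued weights
    return loss.item()
```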
# 3.2 XNOR-Networks | 1603.05279#20 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 21 | We want to make f an identity mapping, which is done by re-arranging the activation functions (ReLU and/or BN). The original Residual Unit in [1] has a shape in Fig. 4(a): BN is used after each weight layer, and ReLU is adopted after BN except that the last ReLU in a Residual Unit is after element-wise addition (f = ReLU). Fig. 4(b-e) show the alternatives we investigated, explained as follows.
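For reference, the original ordering of Fig. 4(a) can be written as a compact PyTorch module (an illustrative sketch, not the released code); the variants in Fig. 4(b-e) only permute where BN and ReLU sit relative to the weight layers and the addition.

```python
import torch.nn as nn

class OriginalResidualUnit(nn.Module):
    """Fig. 4(a): conv -> BN -> ReLU -> conv -> BN -> add -> ReLU (f = ReLU after addition)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(x + out)   # the after-addition ReLU is what this section rearranges
```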
# 4.1 Experiments on Activation
In this section we experiment with ResNet-110 and a 164-layer Bottleneck [1] architecture (denoted as ResNet-164). A bottleneck Residual Unit consists of a 1x1 layer for reducing dimension, a 3x3 layer, and a 1x1 layer for restoring dimension. As designed in [1], its computational complexity is similar to the two-3x3 Residual Unit. More details are in the appendix. The baseline ResNet-164 has a competitive result of 5.93% on CIFAR-10 (Table 2). | 1603.05027#21 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 21 | # 3.2 XNOR-Networks
So far, we managed to find binary weights and a scaling factor to estimate the real-value weights. The inputs to the convolutional layers are still real-value tensors. Now, we explain how to binarize both weights and inputs, so convolutions can be implemented efficiently using XNOR and bitcounting operations. This is the key element of our XNOR-Networks. In order to constrain a convolutional neural network (I, W, *) to have binary weights and binary inputs, we need to enforce binary operands at each step of the convolutional operation. A convolution consists of repeating a shift operation and a dot product. The shift operation moves the weight filter over the input, and the dot product performs element-wise multiplications between the values of the weight filter and the corresponding part of the input. If we express the dot product in terms of binary operations, the convolution can be approximated using binary operations. The dot product between two binary vectors can be implemented by XNOR-bitcounting operations [11]. In this section, we explain how to approximate the dot product between two vectors in R^n by a dot product between two vectors in {+1, -1}^n. Next, we demonstrate how to use this approximation for estimating a convolutional operation between two tensors. | 1603.05279#21 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 22 | BN after addition. Before turning f into an identity mapping, we go the opposite way by adopting BN after addition (Fig. 4(b)). In this case f involves BN and ReLU. The results become considerably worse than the baseline (Table 2). Unlike the original design, now the BN layer alters the signal that passes through the shortcut and impedes information propagation, as reflected by the difficulties on reducing training loss at the beginning of training (Fig. 6 left).
ReLU before addition. A naive choice of making f into an identity mapping is to move the ReLU before addition (Fig. 4(c)). However, this leads to a non-negative output from the transform F, while intuitively a "residual" function should take values in (-∞, +∞). As a result, the forward propagated signal is monotonically increasing. This may impact the representational ability, and the result is worse (7.84%, Table 2) than the baseline. We expect to have a residual function taking values in (-∞, +∞). This condition is satisfied by other Residual Units including the following ones. | 1603.05027#22 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 22 | Binary Dot Product: To approximate the dot product between X, W ∈ R^n such that X^T W ≈ β H^T α B, where H, B ∈ {+1, -1}^n and β, α ∈ R+, we solve the following optimization:
α*, B*, β*, H* = argmin_{α,B,β,H} ||X ⊙ W - βα H ⊙ B|| (7)
where ⊙ indicates element-wise product. We define Y ∈ R^n such that Y_i = X_i W_i, C ∈ {+1, -1}^n such that C_i = H_i B_i, and γ ∈ R+ such that γ = βα. Equation 7 can be written as:
γ*, C* = argmin_{γ,C} ||Y - γC|| (8)
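Once both operands are constrained to {+1, -1}^n, the dot product in the objective above can be computed with XNOR and bit-counting; the following sketch (illustrative Python, encoding each vector as an n-bit integer where a set bit stands for +1) verifies that equivalence:

```python
import random

def xnor_dot(x_bits, w_bits, n):
    """Dot product of two {+1, -1}^n vectors given as n-bit integers."""
    xnor = ~(x_bits ^ w_bits) & ((1 << n) - 1)   # 1 wherever the two signs agree
    agree = bin(xnor).count("1")                 # bitcount (popcount)
    return 2 * agree - n                         # (+1)*agree + (-1)*(n - agree)

n = 16
x = [random.choice([+1, -1]) for _ in range(n)]
w = [random.choice([+1, -1]) for _ in range(n)]
to_bits = lambda v: sum(1 << i for i, s in enumerate(v) if s == +1)
assert xnor_dot(to_bits(x), to_bits(w), n) == sum(a * b for a, b in zip(x, w))
```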
| 1603.05279#22 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 23 | Post-activation or pre-activation? In the original design (Eqn.(1) and Eqn.(2)), the activation xl+1 = f (yl) aï¬ects both paths in the next Residual Unit: yl+1 = f (yl) + F(f (yl), Wl+1). Next we develop an asymmetric form where an activation Ëf only aï¬ects the F path: yl+1 = yl + F( Ëf (yl), Wl+1), for any l (Fig. 5 (a) to (b)). By renaming the notations, we have the following form:
xl+1 = xl + F( Ëf (xl), Wl), . (9)
It is easy to see that Eqn.(9) is similar to Eqn.(4), and can enable a backward formulation similar to Eqn.(5). For this new Residual Unit as in Eqn.(9), the new after-addition activation becomes an identity mapping. This design means that if a new after-addition activation f̂ is asymmetrically adopted, it is equivalent to recasting f̂ as the pre-activation of the next Residual Unit. This is illustrated in Fig. 5.
| 1603.05027#23 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
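The asymmetric form x_{l+1} = x_l + F(f̂(x_l), W_l) in chunk 1603.05027#23 corresponds to a "full pre-activation" block. Below is a minimal PyTorch sketch of such a unit; the class name and layer widths are my own illustration (the authors' released code is in Torch), so treat it as a sketch rather than the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreActBlock(nn.Module):
    """Full pre-activation Residual Unit: x_{l+1} = x_l + F(BN -> ReLU -> conv, twice)."""
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))   # f-hat acts only on the residual path
        out = self.conv2(F.relu(self.bn2(out)))
        return x + out                           # identity skip, no after-addition activation
```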
1603.05279 | 23 | γ*, C* = argmin_{γ,C} ||Y − γC||   (8)
[Figure 2 schematic: (1) binarizing the weights, (2) binarizing the input, (3) efficient computation of the scaling factors over overlapping sub-tensors, (4) convolution with XNOR-bitcount.]
Fig. 2: This figure illustrates the procedure explained in section 3.2 for approximating a convolution using binary operations.
the optimal solutions can be achieved from equation 2 as follows:
C* = sign(Y) = sign(X) ⊙ sign(W) = H* ⊙ B*   (9)
Since |Xi|, |Wi| are independent, knowing that Yi = XiWi then, E [|Yi|] = E [|Xi||Wi|] = E [|Xi|] E [|Wi|] therefore, | 1603.05279#23 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
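The closed-form solution quoted in chunk 1603.05279#23 (C* = sign(Y), with the least-squares γ for a fixed C) can be checked numerically. The short NumPy sketch below compares the closed form against an exhaustive search over all sign patterns for a small vector; the variable names and the test vector are mine.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(8)

def best_gamma(c):
    return float(y @ c) / len(y)                     # least-squares gamma for a fixed C

obj = lambda c: np.linalg.norm(y - best_gamma(c) * c)

# Exhaustive search over all C in {+1, -1}^8.
brute = min(obj(np.array(c)) for c in itertools.product([-1.0, 1.0], repeat=8))
closed = obj(np.sign(y))                             # C* = sign(y), gamma* = mean(|y|)
print(np.isclose(brute, closed))                     # True: closed form attains the optimum
```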
1603.05027 | 24 | [Figure 5 schematic: (a) the original Residual Unit with after-addition activation, (b) an asymmetric variant where the activation is adopted only on the weight path, and (c) the equivalent pre-activation Residual Unit.]
Figure 5. Using asymmetric after-addition activation is equivalent to constructing a pre-activation Residual Unit.
Table 3. Classification error (%) on the CIFAR-10/100 test set using the original Residual Units and our pre-activation Residual Units.
CIFAR-10, ResNet-110 (1layer skip): original unit 9.90, pre-activation unit 8.91
CIFAR-10, ResNet-110: original unit 6.61, pre-activation unit 6.37
CIFAR-10, ResNet-164: original unit 5.93, pre-activation unit 5.46
CIFAR-10, ResNet-1001: original unit 7.61, pre-activation unit 4.92
CIFAR-100, ResNet-164: original unit 25.16, pre-activation unit 24.33
CIFAR-100, ResNet-1001: original unit 27.82, pre-activation unit 22.71 | 1603.05027#24 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05027 | 25 | The distinction between post-activation/pre-activation is caused by the presence of the element-wise addition. For a plain network that has N layers, there are N − 1 activations (BN/ReLU), and it does not matter whether we think of them as post- or pre-activations. But for branched layers merged by addition, the position of activation matters.
and (ii) full pre-activation (Fig. 4(e)) where BN and ReLU are both adopted before weight layers. Table 2 shows that the ReLU-only pre-activation performs very similar to the baseline on ResNet-110/164. This ReLU layer is not used in conjunction with a BN layer, and may not enjoy the benefits of BN [8].
Somehow surprisingly, when BN and ReLU are both used as pre-activation, the results are improved by healthy margins (Table 2 and Table 3). In Table 3 we report results using various architectures: (i) ResNet-110, (ii) ResNet-164, (iii) a 110-layer ResNet architecture in which each shortcut skips only 1 layer (i.e., | 1603.05027#25 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 25 | Binary Convolution: Convolving weight filter W ∈ R^{c×w×h} (where w_in > w, h_in > h) with the input tensor I ∈ R^{c×w_in×h_in} requires computing the scaling factor β for all possible sub-tensors in I with same size as W. Two of these sub-tensors are illustrated in figure 2 (second row) by X_1 and X_2. Due to overlaps between sub-tensors, computing β for all possible sub-tensors leads to a large number of redundant computations. To overcome this redundancy, first, we compute a matrix A = (Σ_i |I_{:,:,i}|)/c, which is the average over absolute values of the elements in the input I across the channel. Then we convolve A with a 2D filter k ∈ R^{w×h}, K = A ∗ k, where ∀ij k_ij = 1/(wh). K contains scaling factors β for all sub-tensors in the input I. K_ij corresponds to β for a sub-tensor centered at the location ij (across width and height). This procedure is shown in the third row of figure 2. Once we obtained the scaling factor α for the weight and β for all sub-tensors in I (denoted by K), we can approximate the convolution between input I and weight filter W mainly using binary operations:
I ∗ W ≈ (sign(I) ⊛ sign(W)) ⊙ K α   (11) | 1603.05279#25 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
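Chunk 1603.05279#25 describes how the per-location scaling map K is obtained by averaging |I| over channels and convolving with an averaging kernel k. The PyTorch sketch below follows that recipe; the function name is mine, and a float convolution of sign tensors stands in for the XNOR + bitcount kernel, so this only illustrates the arithmetic of equation 11, not the paper's optimized implementation.

```python
import torch
import torch.nn.functional as F

def xnor_conv2d(I, W):
    """Approximate I * W per Eq. 11: (sign(I) conv sign(W)) scaled by K and alpha."""
    c, h, w = W.shape[1], W.shape[2], W.shape[3]
    alpha = W.abs().mean(dim=(1, 2, 3))           # one scaling factor per output filter
    A = I.abs().mean(dim=1, keepdim=True)         # average |I| across channels
    k = torch.full((1, 1, h, w), 1.0 / (h * w))   # averaging kernel, k_ij = 1/(wh)
    K = F.conv2d(A, k)                            # beta for every sub-tensor location
    bin_out = F.conv2d(I.sign(), W.sign())        # stand-in for the XNOR-bitcount convolution
    return bin_out * K * alpha.view(1, -1, 1, 1)

I = torch.randn(1, 16, 14, 14)
W = torch.randn(8, 16, 3, 3)
print((xnor_conv2d(I, W) - F.conv2d(I, W)).abs().mean())   # approximation error
```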
1603.05027 | 26 | [Figure 6 plot residue: training curves for ResNet-164 (original vs. proposed pre-activation) and ResNet-110 (original vs. BN after addition); x-axis: iterations.]
Figure 6. Training curves on CIFAR-10. Left: BN after addition (Fig. 4(b)) using ResNet-110. Right: pre-activation unit (Fig. 4(e)) on ResNet-164. Solid lines denote test error, and dashed lines denote training loss.
a Residual Unit has only 1 layer), denoted as "ResNet-110(1layer)", and (iv) a 1001-layer bottleneck architecture that has 333 Residual Units (111 on each feature map size), denoted as "ResNet-1001". We also experiment on CIFAR-100. Table 3 shows that our "pre-activation" models are consistently better than the baseline counterparts. We analyze these results in the following.
# 4.2 Analysis | 1603.05027#26 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 26 | I ∗ W ≈ (sign(I) ⊛ sign(W)) ⊙ K α   (11)
where ⊛ indicates a convolutional operation using XNOR and bitcount operations. This is illustrated in the last row in figure 2. Note that the number of non-binary operations is very small compared to binary operations.
[Figure 3 schematic comparing a typical block in a CNN with a block in XNOR-Net.]
Fig. 3: This figure contrasts the block structure in our XNOR-Network (right) with a typical CNN (left). | 1603.05279#26 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 27 | # 4.2 Analysis
We find the impact of pre-activation is twofold. First, the optimization is further eased (comparing with the baseline ResNet) because f is an identity mapping. Second, using BN as pre-activation improves regularization of the models.
Ease of optimization. This effect is particularly obvious when training the 1001-layer ResNet. Fig. 1 shows the curves. Using the original design in [1], the training error is reduced very slowly at the beginning of training. For f = ReLU, the signal is impacted if it is negative, and when there are many Residual Units, this effect becomes prominent and Eqn.(3) (so Eqn.(5)) is not a good approximation. On the other hand, when f is an identity mapping, the signal can be propagated directly between any two units. Our 1001-layer network reduces the training loss very quickly (Fig. 1). It also achieves the lowest loss among all models we investigated, suggesting the success of optimization. | 1603.05027#27 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 27 | Training XNOR-Networks: A typical block in CNN contains several different layers. Figure 3 (left) illustrates a typical block in a CNN. This block has four layers in the following order: 1-Convolutional, 2-Batch Normalization, 3-Activation and 4-Pooling. Batch Normalization layer[43] normalizes the input batch by its mean and variance. The activation is an element-wise non-linear function (e.g.,Sigmoid, ReLU). The pool- ing layer applies any type of pooling (e.g.,max,min or average) on the input batch. Applying pooling on binary input results in signiï¬cant loss of information. For exam- ple, max-pooling on binary input returns a tensor that most of its elements are equal to +1. Therefore, we put the pooling layer after the convolution. To further decrease the information loss due to binarization, we normalize the input before binarization. This ensures the data to hold zero mean, therefore, thresholding at zero leads to less quanti- zation error. The order of layers in a block of binary CNN is shown in Figure 3(right). The binary activation layer(BinActiv) computes K and sign(I) as | 1603.05279#27 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 28 | We also find that the impact of f = ReLU is not severe when the ResNet has fewer layers (e.g., 164 in Fig. 6(right)). The training curve seems to suffer a little bit at the beginning of training, but goes into a healthy status soon. By monitoring the responses we observe that this is because after some training, the weights are adjusted into a status such that y_l in Eqn.(1) is more frequently above zero and f does not truncate it (x_l is always non-negative due to the previous ReLU, so y_l is below zero only when the magnitude of F is very negative). The truncation, however, is more frequent when there are 1000 layers. | 1603.05027#28 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 28 | error. The order of layers in a block of binary CNN is shown in Figure 3(right). The binary activation layer (BinActiv) computes K and sign(I) as explained in section 3.2. In the next layer (BinConv), given K and sign(I), we compute binary convolution by equation 11. Then at the last layer (Pool), we apply the pooling operations. We can insert a non-binary activation (e.g., ReLU) after binary convolution. This helps when we use state-of-the-art networks (e.g., AlexNet or VGG). | 1603.05279#28 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
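Chunk 1603.05279#28 above spells out the reordered block (BatchNorm, then binary activation, then binary convolution, then pooling). The PyTorch sketch below mirrors that ordering for inference only; the class name and weight initialization are mine, sign() stands in for the binary activation, a float convolution of signs stands in for XNOR + bitcount, and the per-location K map is omitted for brevity. Training such a block would additionally need real-valued shadow weights and a straight-through gradient, which are not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class XnorBlock(nn.Module):
    """Block order from the text: BatchNorm -> BinActiv -> BinConv -> (ReLU) -> Pool."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.bn = nn.BatchNorm2d(c_in)                        # normalize before binarization
        self.weight = nn.Parameter(torch.randn(c_out, c_in, 3, 3) * 0.05)

    def forward(self, x):
        x = torch.sign(self.bn(x))                            # BinActiv: threshold at zero
        alpha = self.weight.abs().mean(dim=(1, 2, 3)).view(1, -1, 1, 1)
        x = F.conv2d(x, torch.sign(self.weight), padding=1) * alpha   # BinConv, scaled by alpha
        x = F.relu(x)                                         # optional non-binary activation
        return F.max_pool2d(x, 2)                             # pooling after the convolution
```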
1603.05027 | 29 | Table 4. Comparisons with state-of-the-art methods on CIFAR-10 and CIFAR-100 using "moderate data augmentation" (flip/translation), except for ELU [12] with no augmentation. Better results of [13,14] have been reported using stronger data augmentation and ensembling. For the ResNets we also report the number of parameters. Our results are the median of 5 runs with mean±std in the brackets. All ResNets results are obtained with a mini-batch size of 128 except † with a mini-batch size of 64 (code available at https://github.com/KaimingHe/resnet-1k-layers). | 1603.05027#29 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 29 | Once we have the binary CNN structure, the training algorithm would be the same as algorithm 1.
Binary Gradient: The computational bottleneck in the backward pass at each layer is computing a convolution between weight filters (w) and the gradients with respect to the inputs (g_in). Similar to binarization in the forward pass, we can binarize g_in in the backward pass. This leads to a very efficient training procedure using binary operations. Note that if we use equation 6 to compute the scaling factor for g_in, the direction of maximum change for SGD would be diminished. To preserve the maximum change in all dimensions, we use max_i(|g_in_i|) as the scaling factor.
k-bit Quantization: So far, we showed 1-bit quantization of weights and inputs using the sign(x) function. One can easily extend the quantization level to k bits by using q_k(x) = 2([(2^k − 1)(x + 1)/2] / (2^k − 1)) − 1 instead of the sign function, where [.] indicates the rounding operation and x ∈ [−1, 1].
# 4 Experiments | 1603.05279#29 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
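The k-bit quantizer in chunk 1603.05279#29 arrived garbled in extraction; my reading of it, reconstructed above, maps x in [-1, 1] onto 2^k evenly spaced levels and reduces to a sign-like function for k = 1. The small Python sketch below implements that reading (the function name and test values are mine).

```python
import numpy as np

def q_k(x, k):
    """k-bit quantizer: 2 * round((2**k - 1) * (x + 1) / 2) / (2**k - 1) - 1, for x in [-1, 1]."""
    levels = 2 ** k - 1
    return 2.0 * np.round(levels * (x + 1.0) / 2.0) / levels - 1.0

x = np.linspace(-1.0, 1.0, 9)
print(q_k(x, 2))   # four levels: -1, -1/3, 1/3, 1
```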
1603.05027 | 30 | CIFAR-10 error (%): NIN [15] 8.81; DSN [16] 8.22; FitNet [17] 8.39; Highway [7] 7.72; All-CNN [14] 7.25; ELU [12] 6.55; FitResNet, LSUV [18] 5.84; ResNet-110 [1] (1.7M) 6.61; ResNet-1202 [1] (19.4M) 7.93; ResNet-164 [ours] (1.7M) 5.46; ResNet-1001 [ours] (10.2M) 4.92 (4.89±0.14); ResNet-1001 [ours] (10.2M)† 4.62 (4.69±0.20)
CIFAR-100 error (%): NIN [15] 35.68; DSN [16] 34.57; FitNet [17] 35.04; Highway [7] 32.39; All-CNN [14] 33.71; ELU [12] 24.28; FitNet, LSUV [18] 27.66; ResNet-164 [1] (1.7M) 25.16; ResNet-1001 [1] (10.2M) 27.82; ResNet-164 [ours] (1.7M) 24.33; ResNet-1001 [ours] (10.2M) 22.71 | 1603.05027#30 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 30 | We evaluate our method by analyzing its efficiency and accuracy. We measure the efficiency by computing the computational speedup (in terms of number of high precision operations) achieved by our binary convolution vs. standard convolution. To mea-
We evaluate our method by analyzing its efï¬ciency and accuracy. We measure the ef- ï¬ciency by computing the computational speedup (in terms of number of high preci- sion operation) achieved by our binary convolution vs. standard convolution. To mea9
10 Rastegari et al.
[Figure 4 chart residue: double vs. binary precision memory for VGG-19, ResNet-18 and AlexNet; panels (a), (b), (c).]
Fig. 4: This figure shows the efficiency of binary convolutions in terms of memory (a) and computation (b-c). (a) contrasts the required memory for binary and double precision weights in three different architectures (AlexNet, ResNet-18 and VGG-19). (b,c) show the speedup gained by binary convolution under (b) different number of channels and (c) different filter size | 1603.05279#30 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 31 | sure accuracy, we perform image classification on the large-scale ImageNet dataset. This paper is the first work that evaluates binary neural networks on the ImageNet dataset. Our binarization technique is general, we can use any CNN architecture. We evaluate AlexNet [1] and two deeper architectures in our experiments. We compare our method with two recent works on binarizing neural networks; BinaryConnect [38] and BinaryNet [11]. The classification accuracy of our binary-weight-network version of AlexNet is as accurate as the full precision version of AlexNet. This classification accuracy outperforms competitors on binary neural networks by a large margin. We also present an ablation study, where we evaluate the key elements of our proposed method; computing scaling factors and our block structure for binary CNN. We show that our method of computing the scaling factors is important to reach high accuracy.
# 4.1 Efficiency Analysis | 1603.05279#31 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 31 | Reducing overfitting. Another impact of using the proposed pre-activation unit is on regularization, as shown in Fig. 6 (right). The pre-activation version reaches slightly higher training loss at convergence, but produces lower test error. This phenomenon is observed on ResNet-110, ResNet-110(1-layer), and ResNet-164 on both CIFAR-10 and 100. This is presumably caused by BN's regularization effect [8]. In the original Residual Unit (Fig. 4(a)), although the BN normalizes the signal, this is soon added to the shortcut and thus the merged signal is not normalized. This unnormalized signal is then used as the input of the next weight layer. On the contrary, in our pre-activation version, the inputs to all weight layers have been normalized.
# 5 Results | 1603.05027#32 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 32 | # 4.1 Efï¬ciency Analysis
In a standard convolution, the total number of operations is cN_W N_I, where c is the number of channels, N_W = wh and N_I = w_in h_in. Note that some modern CPUs can fuse the multiplication and addition as a single cycle operation. On those CPUs, Binary-Weight-Networks does not deliver speed up. Our binary approximation of convolution (equation 11) has cN_W N_I binary operations and N_I non-binary operations. With the current generation of CPUs, we can perform 64 binary operations in one clock of CPU, therefore the speedup can be computed by S =
cN_W N_I / ((1/64) cN_W N_I + N_I) | 1603.05279#32 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
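The speedup expression reconstructed in chunk 1603.05279#32 above, S = cN_W N_I / ((1/64) cN_W N_I + N_I), can be checked against the 62.27x figure quoted in the neighboring chunk. The short Python sketch below does exactly that; the function name is mine, and c = 256, N_W = 3x3, N_I = 14x14 are the parameter values stated in the text.

```python
def speedup(c, n_w, n_i):
    """S = (c * N_W * N_I) / ((1/64) * c * N_W * N_I + N_I)."""
    return (c * n_w * n_i) / (c * n_w * n_i / 64.0 + n_i)

print(speedup(c=256, n_w=3 * 3, n_i=14 * 14))   # ~62.27, matching the quoted theoretical speedup
```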
1603.05027 | 33 | # 5 Results
Comparisons on CIFAR-10/100. Table 4 compares the state-of-the-art methods on CIFAR-10/100, where we achieve competitive results. We note that we do not specially tailor the network width or filter sizes, nor use regularization techniques (such as dropout) which are very effective for these small datasets. We obtain these results via a simple but essential concept, going deeper. These results demonstrate the potential of pushing the limits of depth.
Comparisons on ImageNet. Next we report experimental results on the 1000-class ImageNet dataset [3]. We have done preliminary experiments using the skip connections studied in Fig. 2 & 3 on ImageNet with ResNet-101 [1], and observed similar optimization difficulties. The training error of these non-identity shortcut networks is obviously higher than the original ResNet at the first learning rate | 1603.05027#32 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 33 | S = cN_W N_I / ((1/64) cN_W N_I + N_I)
The speedup depends on the channel size and filter size but not the input size. In figure 4-(b-c) we illustrate the speedup achieved by changing the number of channels and filter size. While changing one parameter, we fix other parameters as follows: c = 256, n_I = 14^2 and n_W = 3^2 (majority of convolutions in ResNet [4] architecture have this structure). Using our approximation of convolution we gain 62.27x theoretical speed up, but in our CPU implementation with all of the overheads, we achieve 58x speed up in one convolution (excluding the process for memory allocation and memory access). With the small channel size (c = 3) and filter size (N_W = 1 x 1) the speedup is not considerably high. This motivates us to avoid binarization at the first and last | 1603.05279#33 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 33 | Table 5. Comparisons of single-crop error on the ILSVRC 2012 validation set. All ResNets are trained using the same hyper-parameters and implementations as [1]. Our Residual Units are the full pre-activation version (Fig. 4(e)). †: code/model available at https://github.com/facebook/fb.resnet.torch/tree/master/pretrained, using scale and aspect ratio augmentation in [20].
method, augmentation, train crop, test crop, top-1, top-5:
ResNet-152, original Residual Unit [1]: scale, 224x224, 224x224, 23.0, 6.7
ResNet-152, original Residual Unit [1]: scale, 224x224, 320x320, 21.3, 5.5
ResNet-152, pre-act Residual Unit: scale, 224x224, 320x320, 21.1, 5.5
ResNet-200, original Residual Unit [1]: scale, 224x224, 320x320, 21.8, 6.0
ResNet-200, pre-act Residual Unit: scale, 224x224, 320x320, 20.7, 5.3
ResNet-200, pre-act Residual Unit: scale+asp ratio, 224x224, 320x320, 20.1†, 4.8†
Inception v3 [19]: scale+asp ratio, 299x299, 299x299, 21.2, 5.6 | 1603.05027#33 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 34 | layer of a CNN. In the first layer the channel size is 3 and in the last layer the filter size is 1 x 1. A similar strategy was used in [11]. Figure 4-a shows the required memory for three different CNN architectures (AlexNet, VGG-19, ResNet-18) with binary and double precision weights. Binary-weight-networks are so small that can be easily fitted into portable devices. BinaryNet [11] is in the same order of memory and computation efficiency as our method. In Figure 4, we show an analysis of computation and memory cost for a binary convolution. The same analysis is valid for BinaryNet and BinaryConnect. The key difference of our method is using a scaling-factor, which does not change the order of efficiency while providing a significant improvement in accuracy.
# Image Classification | 1603.05279#34 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 34 | (similar to Fig. 3), and we decided to halt training due to limited resources. But we did finish a "BN after addition" version (Fig. 4(b)) of ResNet-101 on ImageNet and observed higher training loss and validation error. This model's single-crop (224x224) validation error is 24.6%/7.5%, vs. the original ResNet-101's 23.6%/7.1%. This is in line with the results on CIFAR in Fig. 6 (left).
Table 5 shows the results of ResNet-152 [1] and ResNet-200 (footnote 3), all trained from scratch. We notice that the original ResNet paper [1] trained the models using scale jittering with shorter side s ∈ [256, 480], and so the test of a 224x224 crop on s = 256 (as did in [1]) is negatively biased. Instead, we test a single 320x320 crop from s = 320, for all original and our ResNets. Even though the ResNets are trained on smaller crops, they can be easily tested on larger crops because the ResNets are fully convolutional by design. This size is also close to 299x299 used by Inception v3 [19], allowing a fairer comparison. | 1603.05027#34 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 35 | We evaluate the performance of our proposed approach on the task of natural im- age classiï¬cation. So far, in the literature, binary neural network methods have pre- sented their evaluations on either limited domain or simpliï¬ed datasets e.g.,CIFAR-10, MNIST, SVHN. To compare with state-of-the-art vision, we evaluate our method on ImageNet (ILSVRC2012). ImageNet has â¼1.2M train images from 1K categories and 50K validation images. The images in this dataset are natural images with reasonably high resolution compared to the CIFAR and MNIST dataset, which have relatively small images. We report our classiï¬cation performance using Top-1 and Top-5 accuracies. We adopt three different CNN architectures as our base architectures for binarization: AlexNet [1], Residual Networks (known as ResNet) [4], and a variant of GoogLenet [3].We compare our Binary-weight-network (BWN) with BinaryConnect(BC) [38] and our XNOR-Networks(XNOR-Net) with BinaryNeuralNet(BNN) [11]. BinaryConnect(BC) is a method for training | 1603.05279#35 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 35 | The original ResNet-152 [1] has top-1 error of 21.3% on a 320x320 crop, and our pre-activation counterpart has 21.1%. The gain is not big on ResNet-152 because this model has not shown severe generalization difficulties. However, the original ResNet-200 has an error rate of 21.8%, higher than the baseline ResNet-152. But we find that the original ResNet-200 has lower training error than ResNet-152, suggesting that it suffers from overfitting.
Our pre-activation ResNet-200 has an error rate of 20.7%, which is 1.1% lower than the baseline ResNet-200 and also lower than the two versions of ResNet-152. When using the scale and aspect ratio augmentation of [20,19], our ResNet-200 has a result better than Inception v3 [19] (Table 5). Concurrent with our work, an Inception-ResNet-v2 model [21] achieves a single-crop result of 19.9%/4.9%. We expect our observations and the proposed Residual Unit will help this type and generally other types of ResNets.
Computational Cost. Our models' computational complexity is linear on | 1603.05027#35 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 36 | [38] and our XNOR-Networks(XNOR-Net) with BinaryNeuralNet(BNN) [11]. BinaryConnect(BC) is a method for training a deep neural network with binary weights during forward and backward propagations. Similar to our approach, they keep the real-value weights during the updating parameters step. Our binarization is different from BC. The bina- rization in BC can be either deterministic or stochastic. We use the deterministic bina- rization for BC in our comparisons because the stochastic binarization is not efï¬cient. The same evaluation settings have been used and discussed in [11]. BinaryNeural- Net(BNN) [11] is a neural network with binary weights and activations during infer- ence and gradient computation in training. In concept, this is a similar approach to our XNOR-Network but the binarization method and the network structure in BNN is dif- ferent from ours. Their training algorithm is similar to BC and they used deterministic binarization in their evaluations. | 1603.05279#36 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 36 | Computational Cost. Our models' computational complexity is linear on
3: The ResNet-200 has 16 more 3-layer bottleneck Residual Units than ResNet-152, which are added on the feature map of 28x28.
depth (so a 1001-layer net is ~10x complex of a 100-layer net). On CIFAR, ResNet-1001 takes about 27 hours to train on 2 GPUs; on ImageNet, ResNet-200 takes about 3 weeks to train on 8 GPUs (on par with VGG nets [22]).
# 6 Conclusions
This paper investigates the propagation formulations behind the connection mechanisms of deep residual networks. Our derivations imply that identity shortcut connections and identity after-addition activation are essential for making information propagation smooth. Ablation experiments demonstrate phenomena that are consistent with our derivations. We also present 1000-layer deep networks that can be easily trained and achieve improved accuracy. | 1603.05027#36 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 37 | CIFAR-10: BC and BNN showed near state-of-the-art performance on CIFAR-10, MNIST, and SVHN dataset. BWN and XNOR-Net on CIFAR-10 using the same network architecture as BC and BNN achieve the error rate of 9.88% and 10.17% respectively. In this paper we explore the possibility of obtaining near state-of-the-art results on a much larger and more challenging dataset (ImageNet).
AlexNet [1] is a CNN architecture with 5 convolutional layers and two fully-connected layers. It was the first CNN architecture shown to be successful on the ImageNet classification task. This network has 61M parameters. We use AlexNet coupled with batch normalization layers [43].
Train: In each iteration of training, images are resized to have 256 pixels on their smaller dimension and then a random 224 × 224 crop is selected for training. We run
[Figure: Top-1 and Top-5 accuracy of the Binary-Weight and Binary-Weight-Input networks versus number of training epochs.] | 1603.05279#37 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 38 | Appendix: Implementation Details The implementation details and hyper-parameters are the same as those in [1]. On CIFAR we use only the translation and flipping augmentation in [1] for training. The learning rate starts from 0.1, and is divided by 10 at 32k and 48k iterations. Following [1], for all CIFAR experiments we warm up the training by using a smaller learning rate of 0.01 for the first 400 iterations and go back to 0.1 after that, although we remark that this is not necessary for our proposed Residual Unit. The mini-batch size is 128 on 2 GPUs (64 each), the weight decay is 0.0001, the momentum is 0.9, and the weights are initialized as in [23].
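As a rough illustration of this CIFAR schedule, a minimal training-loop sketch (assuming PyTorch; the placeholder model, the empty loop body, and the 64k total iteration count are assumptions, not the authors' code):

```python
import torch
from torch.optim import SGD

def lr_at_iteration(it, base_lr=0.1, warmup_iters=400, warmup_lr=0.01, milestones=(32000, 48000)):
    """Warm up at 0.01 for the first 400 iterations, then 0.1, divided by 10 at 32k and 48k."""
    if it < warmup_iters:
        return warmup_lr
    lr = base_lr
    for m in milestones:
        if it >= m:
            lr *= 0.1
    return lr

model = torch.nn.Linear(32 * 32 * 3, 10)  # placeholder module standing in for the ResNet
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

for it in range(64000):  # total iteration count is an assumption, not stated in the text above
    for group in optimizer.param_groups:
        group["lr"] = lr_at_iteration(it)
    # forward pass on a mini-batch of 128, loss.backward(), optimizer.step(), optimizer.zero_grad()
```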
On ImageNet, we train the models using the same data augmentation as in [1]. The learning rate starts from 0.1 (no warming up), and is divided by 10 at 30 and 60 epochs. The mini-batch size is 256 on 8 GPUs (32 each). The weight decay, momentum, and weight initialization are the same as above. | 1603.05027#38 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 38 | [Figure 5: plot of Top-1 and Top-5 accuracy versus number of training epochs; see the caption below.]
Fig. 5: This figure compares the ImageNet classification accuracy on Top-1 and Top-5 across training epochs. Our approaches BWN and XNOR-Net outperform BinaryConnect (BC) and BinaryNet (BNN) in all the epochs by a large margin (~17%).
Classification Accuracy (%), Top-1 / Top-5:
Binary-Weight: BWN 56.8 / 79.4; BC [11] 35.4 / 61.0
Binary-Input-Binary-Weight: XNOR-Net 44.2 / 69.2; BNN [11] 27.9 / 50.42
Full-Precision: AlexNet [1] 56.6 / 80.2
Table 1: This table compares the final accuracies (Top-1 / Top-5) of the full-precision network with our binary-precision networks, Binary-Weight-Networks (BWN) and XNOR-Networks (XNOR-Net), and the competitor methods, BinaryConnect (BC) and BinaryNet (BNN). | 1603.05279#38 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 39 | When using the pre-activation Residual Units (Fig. 4(d)(e) and Fig. 5), we pay special attention to the first and the last Residual Units of the entire network. For the first Residual Unit (that follows a stand-alone convolutional layer, conv1), we adopt the first activation right after conv1 and before splitting into two paths; for the last Residual Unit (followed by average pooling and a fully-connected classifier), we adopt an extra activation right after its element-wise addition. These two special cases are the natural outcome when we obtain the pre-activation network via the modification procedure as shown in Fig. 5.
The bottleneck Residual Units (for ResNet-164/1001 on CIFAR) are constructed following [1]. For example, a [3×3, 16; 3×3, 16] unit in ResNet-110 is replaced with a [1×1, 16; 3×3, 16; 1×1, 64] unit in ResNet-164, both of which have roughly the same number of parameters. For the bottleneck ResNets, when reducing the feature map size we use projection shortcuts [1] for increasing dimensions, and when pre-activation is used, these projection shortcuts are also with pre-activation.
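For concreteness, a sketch of such a pre-activation bottleneck unit in PyTorch (module names and the example sizes are illustrative; this is not the authors' released code):

```python
import torch
import torch.nn as nn

class PreActBottleneck(nn.Module):
    """Pre-activation bottleneck Residual Unit: BN-ReLU-conv ordering, identity after addition."""
    def __init__(self, in_planes, planes, out_planes, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, out_planes, kernel_size=1, bias=False)
        # Projection shortcut only when increasing dimensions or striding; it also sees the
        # pre-activated input, matching the text above.
        self.shortcut = None
        if stride != 1 or in_planes != out_planes:
            self.shortcut = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)

    def forward(self, x):
        pre = torch.relu(self.bn1(x))
        identity = x if self.shortcut is None else self.shortcut(pre)
        out = self.conv1(pre)
        out = self.conv2(torch.relu(self.bn2(out)))
        out = self.conv3(torch.relu(self.bn3(out)))
        return out + identity  # no activation after the addition

# e.g. the [1x1,16 / 3x3,16 / 1x1,64] unit of ResNet-164:
unit = PreActBottleneck(in_planes=64, planes=16, out_planes=64)
y = unit(torch.randn(2, 64, 32, 32))
```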
# References | 1603.05027#39 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 39 | the training algorithm for 16 epochs with batch size equal to 512. We use negative-log-likelihood over the soft-max of the outputs as our classification loss function. In our implementation of AlexNet we do not use the Local-Response-Normalization (LRN) layer.3 We use SGD with momentum=0.9 for updating parameters in BWN and BC. For XNOR-Net and BNN we used ADAM [42]. ADAM converges faster and usually achieves better accuracy for binary inputs [11]. The learning rate starts at 0.1 and we apply a learning-rate-decay=0.01 every 4 epochs. | 1603.05279#39 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 40 | # References
1. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. (2016)
2. Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: ICML. (2010)
3. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. IJCV (2015)
4. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. (2014)
5. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation (1997)
6. Srivastava, R.K., Greff, K., Schmidhuber, J.: Highway networks. In: ICML workshop. (2015) | 1603.05027#40 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 40 | Test: At inference time, we use the 224 × 224 center crop for forward propagation. Figure 5 demonstrates the classification accuracy for training and inference along the training epochs for top-1 and top-5 scores. The dashed lines represent training accuracy and the solid lines show the validation accuracy. In all of the epochs our method outperforms BC and BNN by a large margin (~17%). Table 1 compares our final accuracy with BC and BNN. We found that the scaling factors for the weights (α) are much more effective than the scaling factors for the inputs (β). Removing β reduces the accuracy by a small margin (less than 1% top-1 on AlexNet).
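As an illustration of the weight scaling factor α discussed here, a minimal sketch (assuming PyTorch; the shapes are arbitrary and this is not the authors' implementation):

```python
import torch

def binarize_weights(weight):
    """Per-filter binarization: alpha = mean(|W|) over each filter (the closed-form optimum
    used in the paper) and B = sign(W); alpha * B approximates W."""
    alpha = weight.abs().view(weight.size(0), -1).mean(dim=1)  # one scalar per output filter
    B = weight.sign()
    return alpha, B

W = torch.randn(96, 3, 11, 11)                 # an AlexNet-like first conv layer, for example
alpha, B = binarize_weights(W)
W_approx = alpha.view(-1, 1, 1, 1) * B
print((W - W_approx).abs().mean())             # reconstruction error of the binary approximation

# The input scaling factor beta is analogous: per spatial location, the mean of |I| over the
# receptive field, which the paper computes efficiently with an averaging filter (omitted here).
```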
Binary Gradient: Using XNOR-Net with a binary gradient, the top-1 accuracy drops by only 1.4%.
Residual Net: We use the ResNet-18 proposed in [4] with short-cut type B.4 Train: In each training iteration, images are resized randomly between 256 and 480 pixels on the smaller dimension and then a random 224 × 224 crop is selected for training. We run the training algorithm for 58 epochs with batch size equal to 256 | 1603.05279#40 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 41 | 6. Srivastava, R.K., Greff, K., Schmidhuber, J.: Highway networks. In: ICML workshop. (2015)
7. Srivastava, R.K., Greff, K., Schmidhuber, J.: Training very deep networks. In: NIPS. (2015)
8. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML. (2015)
9. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural computation (1989)
10. Krizhevsky, A.: Learning multiple layers of features from tiny images. Tech Report (2009)
11. Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580 (2012)
12. Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network | 1603.05027#41 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 41 | 3 Our implementation follows https://gist.github.com/szagoruyko/dd032c529048492630fc
4 We used the Torch implementation in https://github.com/facebook/fb.resnet.torch
Fig. 6: This figure shows the classification accuracy, (a) Top-1 and (b) Top-5 measures, across the training epochs on the ImageNet dataset by Binary-Weight-Network and XNOR-Network using ResNet-18.
Network Variations (top-1 / top-5):
ResNet-18: Binary-Weight-Network 60.8 / 83.0; XNOR-Network 51.2 / 73.2; Full-Precision-Network 69.3 / 89.2
GoogLenet: Binary-Weight-Network 65.5 / 86.1; XNOR-Network N/A; Full-Precision-Network 71.3 / 90.0
Table 2: This table compares the final classification accuracy achieved by our binary precision networks with the full precision network in ResNet-18 and GoogLenet architectures.
images. The learning rate starts at 0.1 and we apply a learning-rate decay of 0.01 at epochs 30 and 40. | 1603.05279#41 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 42 | 12. Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network
learning by exponential linear units (ELUs). In: ICLR. (2016)
13. Graham, B.: Fractional max-pooling. arXiv:1412.6071 (2014)
14. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. arXiv:1412.6806 (2014)
15. Lin, M., Chen, Q., Yan, S.: Network in network. In: ICLR. (2014)
16. Lee, C.Y., Xie, S., Gallagher, P., Zhang, Z., Tu, Z.: Deeply-supervised nets. In: AISTATS. (2015)
17. Romero, A., Ballas, N., Kahou, S.E., Chassang, A., Gatta, C., Bengio, Y.: Fitnets: Hints for thin deep nets. In: ICLR. (2015) | 1603.05027#42 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 42 | images. The learning rate starts at 0.1 and we apply a learning-rate decay of 0.01 at epochs 30 and 40.
Test: At inference time, we use the 224 × 224 center crop for forward propagation. Figure 6 demonstrates the classification accuracy (Top-1 and Top-5) along the epochs for training and inference. The dashed lines represent training and the solid lines represent inference. Table 2 shows our final accuracy by BWN and XNOR-Net.
GoogLenet Variant: We experiment with a variant of GoogLenet [3] that uses a similar number of parameters and connections but only straightforward convolutions, no branching.5 It has 21 convolutional layers with filter sizes alternating between 1 × 1 and 3 × 3.
Train: Images are resized randomly between 256 and 320 pixels on the smaller dimension and then a random 224 × 224 crop is selected for training. We run the training algorithm for 80 epochs with a batch size of 128. The learning rate starts at 0.1 and we use polynomial rate decay, β = 4.
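One common reading of this polynomial decay (the "poly" policy with power β = 4) is sketched below; the total iteration count is an estimate derived from 80 epochs at batch size 128 on ImageNet, not a value given in the text:

```python
def poly_lr(iteration, max_iter, base_lr=0.1, power=4.0):
    """Polynomial ("poly") decay: lr = base_lr * (1 - t/T)**power, here with power beta = 4."""
    return base_lr * (1.0 - iteration / float(max_iter)) ** power

# 80 epochs at roughly 10k iterations/epoch (1.28M images / batch 128) gives T ~ 800k iterations.
for it in (0, 200_000, 400_000, 600_000, 799_999):
    print(it, poly_lr(it, max_iter=800_000))
```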
Test: At inference time, we use a center crop of 224 × 224.
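A hedged sketch of the resize-and-crop preprocessing described in this section (assuming torchvision; the text uses a random resize range that differs per architecture, so a single fixed resize is used here for brevity):

```python
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.Resize(256),        # resize so the smaller image side is 256 (the text samples a
                                   # range, e.g. 256-320 here or 256-480 for ResNet-18)
    transforms.RandomCrop(224),    # random 224x224 crop for training
    transforms.ToTensor(),
])
test_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),    # 224x224 center crop at inference time
    transforms.ToTensor(),
])
```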
# 4.3 Ablation Studies | 1603.05279#42 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05027 | 43 | 18. Mishkin, D., Matas, J.: All you need is a good init. In: ICLR. (2016)
19. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: CVPR. (2016)
20. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR. (2015)
21. Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv:1602.07261 (2016)
22. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR. (2015)
23. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: ICCV. (2015)
| 1603.05027#43 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | [
{
"id": "1602.07261"
}
] |
1603.05279 | 43 | Test: At inference time, we use a center crop of 224 × 224.
# 4.3 Ablation Studies
There are two key differences between our method and the previous network binarization methods: the binarization technique and the block structure in our binary CNN.
5 We used the Darknet [44] implementation: http://pjreddie.com/darknet/imagenet/#extraction
(a) Binary-Weight-Network, strategy for computing α (top-1 / top-5): Using equation 6: 56.8 / 79.4; Using a separate layer: 46.2 / 69.5
(b) XNOR-Network, block structure (top-1 / top-5): C-B-A-P: 30.3 / 57.5; B-A-C-P: 44.2 / 69.2
Table 3: In this table, we evaluate two key elements of our approach: computing the optimal scaling factors and specifying the right order for the layers in a block of a CNN with binary input. (a) demonstrates the importance of the scaling factor in training binary-weight-networks and (b) shows that our way of ordering the layers in a block of a CNN is crucial for training XNOR-Networks. C, B, A, P stand for Convolution, BatchNormalization, Active function (here binary activation), and Pooling respectively. | 1603.05279#43 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 44 | For binarization, we find the optimal scaling factors at each iteration of training. For the block structure, we order the layers in a block in a way that decreases the quantization loss for training XNOR-Net. Here, we evaluate the effect of each of these elements on the performance of the binary networks. Instead of computing the scaling factor α using equation 6, one can consider α as a network parameter. In other words, a layer after the binary convolution multiplies the output of the convolution by a scalar parameter for each filter. This is similar to computing the affine parameters in batch normalization. Table 3-a compares the performance of a binary network with these two ways of computing the scaling factors. As we mentioned in section 3.2, the typical block structure in a CNN is not suitable for binarization. Table 3-b compares the standard block structure C-B-A-P (Convolution, Batch Normalization, Activation, Pooling) with our structure B-A-C-P (A is the binary activation).
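A schematic of the two layer orderings compared in Table 3-b, using stand-in modules (assuming PyTorch; the sign activation below has no straight-through gradient and the convolution is not actually binary, so this only illustrates the ordering):

```python
import torch.nn as nn

class BinActive(nn.Module):
    """Stand-in binary activation: sign of the input."""
    def forward(self, x):
        return x.sign()

channels = 64
# Typical block: Convolution -> BatchNorm -> Activation -> Pooling (C-B-A-P).
cbap = nn.Sequential(
    nn.Conv2d(channels, channels, 3, padding=1),
    nn.BatchNorm2d(channels),
    BinActive(),
    nn.MaxPool2d(2),
)
# Reordered XNOR-Net block: BatchNorm -> BinActivation -> BinConv -> Pooling (B-A-C-P), so the
# convolution sees normalized, binarized input and pooling operates on real-valued conv outputs.
bacp = nn.Sequential(
    nn.BatchNorm2d(channels),
    BinActive(),
    nn.Conv2d(channels, channels, 3, padding=1),  # stand-in for the binary convolution
    nn.MaxPool2d(2),
)
```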
# 5 Conclusion | 1603.05279#44 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 45 | # 5 Conclusion
We introduce simple, efficient, and accurate binary approximations for neural networks. We train a neural network that learns to find binary values for weights, which reduces the size of the network by ~32× and provides the possibility of loading very deep neural networks into portable devices with limited memory. We also propose an architecture, XNOR-Net, that uses mostly bitwise operations to approximate convolutions. This provides ~58× speed up and enables the possibility of running the inference of state-of-the-art deep neural networks on a CPU (rather than a GPU) in real-time.
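A quick back-of-the-envelope check of the ~32× memory figure, assuming the 61M-parameter AlexNet mentioned earlier (per-filter scaling factors are ignored):

```python
params = 61_000_000
full_precision_mb = params * 4 / 1e6   # 32-bit floats: ~244 MB
binary_mb = params / 8 / 1e6           # 1 bit per weight: ~7.6 MB
print(full_precision_mb, binary_mb, full_precision_mb / binary_mb)  # ratio = 32
```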
# Acknowledgements
This work is in part supported by ONR N00014-13-1-0720, NSF IIS-1338054, an Allen Distinguished Investigator Award, and the Allen Institute for Artificial Intelligence.
# References
1. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097–1105 1, 10, 11, 12 | 1603.05279#45 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 46 | 2. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014) 1
3. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 1–9 1, 4, 11, 13
4. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR (2015) 1, 4, 10, 11, 12
5. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2014) 580–587 1
6. Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1440–1448 1 | 1603.05279#46 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 47 | 6. Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1440–1448 1
7. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems. (2015) 91–99 1
8. Oculus, V.: Oculus rift-virtual reality headset for 3d gaming. URL: http://www. oculusvr. com (2012) 1
9. Gottmer, M.: Merging reality and virtuality with microsoft hololens. (2015) 1
10. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 3431–3440 2
11. Courbariaux, M., Bengio, Y.: Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR (2016) 2, 3, 4, 6, 7, 10, 11, 12 | 1603.05279#47 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 48 | 12. Denil, M., Shakibi, B., Dinh, L., de Freitas, N., et al.: Predicting parameters in deep learning. In: Advances in Neural Information Processing Systems. (2013) 2148–2156 3
13. Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4) (1989) 303–314 3
14. Seide, F., Li, G., Yu, D.: Conversational speech transcription using context-dependent deep neural networks. In: Interspeech. (2011) 437–440 3
15. Dauphin, Y.N., Bengio, Y.: Big neural networks waste capacity. arXiv preprint arXiv:1301.3583 (2013) 3
16. Ba, J., Caruana, R.: Do deep nets really need to be deep? In: Advances in neural information processing systems. (2014) 2654–2662 3
17. Hanson, S.J., Pratt, L.Y.: Comparing biases for minimal network construction with back-propagation. In: Advances in neural information processing systems. (1989) 177–185 3
18. LeCun, Y., Denker, J.S., Solla, S.A., Howard, R.E., Jackel, L.D.: Optimal brain damage. In: | 1603.05279#48 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 49 | NIPs. Volume 89. (1989) 3
19. Hassibi, B., Stork, D.G.: Second order derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann (1993) 3
20. Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections for efficient neural network. In: Advances in Neural Information Processing Systems. (2015) 1135–1143 3
21. Van Nguyen, H., Zhou, K., Vemulapalli, R.: Cross-domain synthesis of medical images using efficient location-sensitive deep network. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Springer (2015) 677–684 3
22. Han, S., Mao, H., Dally, W.J.: Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149 (2015) 3
23. Chen, W., Wilson, J.T., Tyree, S., Weinberger, K.Q., Chen, Y.: Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788 (2015) 3 | 1603.05279#49 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 50 | 24. Denton, E.L., Zaremba, W., Bruna, J., LeCun, Y., Fergus, R.: Exploiting linear structure within convolutional networks for efficient evaluation. In: Advances in Neural Information Processing Systems. (2014) 1269–1277 3
25. Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866 (2014) 3
26. Lin, M., Chen, Q., Yan, S.: Network in network. arXiv preprint arXiv:1312.4400 (2013) 4
27. Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, inception-resnet and the impact of residual connections on learning. CoRR (2016) 4
28. Iandola, F.N., Moskewicz, M.W., Ashraf, K., Han, S., Dally, W.J., Keutzer, K.: Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1mb model size. arXiv preprint arXiv:1602.07360 (2016) 4 | 1603.05279#50 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 51 | 29. Gong, Y., Liu, L., Yang, M., Bourdev, L.: Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115 (2014) 4
30. Arora, S., Bhaskara, A., Ge, R., Ma, T.: Provable bounds for learning some deep representations. arXiv preprint arXiv:1310.6343 (2013) 4
31. Vanhoucke, V., Senior, A., Mao, M.Z.: Improving the speed of neural networks on cpus. In: Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop. Volume 1. (2011) 4
32. Hwang, K., Sung, W.: Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In: Signal Processing Systems (SiPS), 2014 IEEE Workshop on, IEEE (2014) 1–6 4
33. Anwar, S., Hwang, K., Sung, W.: Fixed point optimization of deep convolutional neural networks for object recognition. In: Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, IEEE (2015) 1131–1135 4 | 1603.05279#51 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |
1603.05279 | 52 | 34. Lin, Z., Courbariaux, M., Memisevic, R., Bengio, Y.: Neural networks with few multiplications. arXiv preprint arXiv:1510.03009 (2015) 4
35. Courbariaux, M., Bengio, Y., David, J.P.: Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024 (2014) 4
36. Soudry, D., Hubara, I., Meir, R.: Expectation backpropagation: parameter-free training of multilayer neural networks with continuous or discrete weights. In: Advances in Neural Information Processing Systems. (2014) 963–971 4
37. Esser, S.K., Appuswamy, R., Merolla, P., Arthur, J.V., Modha, D.S.: Backpropagation for energy-efficient neuromorphic computing. In: Advances in Neural Information Processing Systems. (2015) 1117–1125 4
38. Courbariaux, M., Bengio, Y., David, J.P.: Binaryconnect: Training deep neural networks with binary weights during propagations. In: Advances in Neural Information Processing Systems. (2015) 3105–3113 4, 6, 10, 11 | 1603.05279#52 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 | [
{
"id": "1602.07360"
},
{
"id": "1502.03167"
},
{
"id": "1510.03009"
},
{
"id": "1504.04788"
},
{
"id": "1601.06071"
},
{
"id": "1510.00149"
}
] |