# Residual Networks Behave Like Ensembles of Relatively Shallow Networks

Andreas Veit, Michael Wilber, Serge Belongie. NIPS 2016. arXiv:1605.06431 [cs.CV]
Source: http://arxiv.org/pdf/1605.06431

Abstract: In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.

# 7 Conclusion
What is the reason behind residual networks' increased performance? In the most recent iteration of residual networks, He et al. [6] provide one hypothesis: "We obtain these results via a simple but essential concept - going deeper." While it is true that they are deeper than previous approaches, we present a complementary explanation. First, our unraveled view reveals that residual networks can be viewed as a collection of many paths, instead of a single ultra-deep network. Second, we perform lesion studies to show that, although these paths are trained jointly, they do not strongly depend on each other. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths. Finally, we show that the paths through the network that contribute gradient during training are shorter than expected. In fact, deep paths are not required during training, as they do not contribute any gradient. Thus, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network. This insight reveals that depth is still an open research question. These promising observations provide a new lens through which to examine neural networks.
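To make the path-counting argument concrete, here is a quick numerical sketch (ours, not code from the paper): under the unraveled view, a network with n residual modules contains 2^n paths, and the number of modules a path traverses follows a Binomial(n, 1/2) distribution, so almost all paths are far shorter than the full depth.

```python
from math import comb

# Unraveled view: a ResNet with n residual modules contains 2^n paths;
# a path's length (number of modules it passes through) ~ Binomial(n, 1/2).
n = 54  # e.g., a 110-layer network with 54 two-layer residual modules

total = 2.0 ** n
for k in range(0, n + 1, 9):
    frac = comb(n, k) / total  # fraction of all paths with length k
    print(f"paths of length {k:2d} modules: {frac:.3e} of all paths")
```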
# Acknowledgements
We would like to thank Sam Kwak and Theofanis Karaletsos for insightful feedback. We also thank the reviewers of NIPS 2016 for their very constructive and helpful feedback and for suggesting the paper title. This work is partly funded by AOL through the Connected Experiences Laboratory (Author 1), an NSF Graduate Research Fellowship award (NSF DGE-1144153, Author 2), and a Google Focused Research award (Author 3).
# References
[1] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[2] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009.
[3] Harris Drucker, Corinna Cortes, Lawrence D. Jackel, Yann LeCun, and Vladimir Vapnik. Boosting and other ensemble methods. Neural Computation, 6(6):1289–1301, 1994.
[4] Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[7] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[8] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, Institut für Informatik, Technische Universität, München, 1991.
[9] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016.
[10] David H. Hubel and Torsten N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106–154, 1962.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
[12] Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[15] Jitendra Malik and Pietro Perona. Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America, 1990.
[16] Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990.
[17] Thomas Serre, Aude Oliva, and Tomaso Poggio. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences, 104(15):6424–6429, 2007.
[18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
[20] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[21] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[22] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, 2014.
[23] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014, pages 818–833. Springer, 2014.
# TERNARY WEIGHT NETWORKS

Fengfu Li1*, Bin Liu2*, Xiaoxing Wang2, Bo Zhang1†, Junchi Yan2†
1 Institute of Applied Math., AMSS, CAS, Beijing, China. [email protected], [email protected]
2 MOE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, China. {binliu_sjtu, figure1_wxx, yanjunchi}@sjtu.edu.cn
*: Equal contribution. †: Corresponding authors.
arXiv:1605.04711v3 [cs.CV], 20 Nov 2022. Source: http://arxiv.org/pdf/1605.04711

# ABSTRACT
We present memory- and computation-efficient ternary weight networks (TWNs), with weights constrained to +1, 0 and -1. The Euclidean distance between the full (float or double) precision weights and the ternary weights along with a scaling factor is minimized in the training stage. Besides, a threshold-based ternary function is optimized to get an approximated solution which can be fast and easily computed. TWNs have shown better expressive abilities than binary precision counterparts. Meanwhile, TWNs achieve up to a 16× model compression rate and need fewer multiplications compared with the float32 precision counterparts. Extensive experiments on MNIST, CIFAR-10, and ImageNet datasets show that TWNs achieve much better results than Binary-Weight-Networks (BWNs), and the classification performance on MNIST and CIFAR-10 is very close to the full precision networks. We also verify our method on the object detection task and show that TWNs significantly outperform BWNs by more than 10% mAP on the PASCAL VOC dataset. The PyTorch version of the source code is available at: https://github.com/Thinklab-SJTU/twns.
# 1. INTRODUCTION AND RELATED WORK

Deep neural networks (DNNs) have made significant improvements in many computer vision tasks such as object recognition [1, 2, 3, 4] and object detection [5, 6]. This motivates interest in deploying state-of-the-art DNN models to real-world applications like smart phones, wearable embedded devices or other edge computing devices. However, these models often need considerable storage and computational power [7], and can easily overburden the limited storage, battery power, and compute capabilities of such devices. As a result, deployment remains a challenge.

To mitigate the storage and computational problem [8, 9], methods that seek to binarize weights or activations in DNN models have been proposed. BinaryConnect [10] uses a single sign function to binarize the weights. Binary Weight Networks [7] adopts the same binarization function but adds an extra scaling factor. The extensions of the previous methods are BinaryNet [11] and XNOR-Net [7], where both weights and activations are binary-valued. These models eliminate most of the multiplications in the forward and backward propagations, and thus have the potential of gaining significant benefits with specialized deep learning (DL) hardware by replacing many multiply-accumulate operations with simple accumulations [12]. Besides, binary weight networks achieve up to a 32× model compression rate. Apart from the binary techniques, some other compression methods focus on identifying models with few parameters while preserving accuracy by compressing existing state-of-the-art DNN models in a lossy way. SqueezeNet [13] is such a model: it has 50× fewer parameters than AlexNet [2] but maintains AlexNet-level accuracy on ImageNet. MobileNet [14] and ShuffleNet [15] propose lightweight architectures to reduce the parameters and computation cost. Other methods search for efficient architectures and achieve great performance on both classification [16, 17] and object detection [18].
Deep Compression [9] is another recently proposed method, which uses pruning, trained quantization and Huffman coding for compressing neural networks. It reduced the storage requirement of AlexNet and VGG-16 [3] by 35× and 49×, respectively, without loss of accuracy. This paper has the following contributions:
1) To the best of our knowledge, this was the first (at least at its debut on arXiv) ternary weight quantization scheme to reduce storage and computational cost for deep neural networks.
2) We propose an approximated and universal solution with a threshold-based ternary function for calculating the ternary weights of the raw neural networks.
3) Experiments show the efficacy of our approach on public benchmarks for both image classification and detection.
# 2. TERNARY WEIGHT NETWORKS
# 2.1. Advantage Overview
We address the limited storage and computational resource issues by introducing ternary weight networks (TWNs), which constrain the weights to be ternary-valued: +1, 0 and -1. TWNs seek to strike a balance between the full precision weight networks (FPWNs) and the binary precision weight networks (BPWNs). The detailed features are listed as follows.
Expressive ability: In most recent network architectures such as VGG [3], GoogLeNet [4] and ResNet [1], the most commonly used convolutional filter is of size 3×3. With binary precision, there are only 2^(3×3) = 512 templates. However, a ternary filter of the same size owns 3^(3×3) = 19683 templates, which gives it roughly 38× stronger expressive ability than the binary counterpart.
- with weights constrained to +1, 0 and -1. The Euclidian distance between full
(float or double) precision weights and the ternary weights along with a
scaling factor is minimized in training stage. Besides, a threshold-based
ternary function is optimized to get an approximated solution which can be fast
and easily computed. TWNs have shown better expressive abilities than binary
precision counterparts. Meanwhile, TWNs achieve up to 16$\times$ model
compression rate and need fewer multiplications compared with the float32
precision counterparts. Extensive experiments on MNIST, CIFAR-10, and ImageNet
datasets show that the TWNs achieve much better result than the
Binary-Weight-Networks (BWNs) and the classification performance on MNIST and
CIFAR-10 is very close to the full precision networks. We also verify our
method on object detection task and show that TWNs significantly outperforms
BWN by more than 10\% mAP on PASCAL VOC dataset. The pytorch version of source
code is available at: https://github.com/Thinklab-SJTU/twns. | http://arxiv.org/pdf/1605.04711 | Fengfu Li, Bin Liu, Xiaoxing Wang, Bo Zhang, Junchi Yan | cs.CV | 5 pages, 3 fitures, conference | null | cs.CV | 20160516 | 20221120 | [
{
"id": "1602.07360"
},
{
"id": "1510.03009"
},
{
"id": "1512.02325"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
}
] |
Model compression: In TWNs, a 2-bit storage requirement is needed per weight. Thus, TWNs achieve up to a 16× model compression rate compared with the float32 precision counterparts. Take VGG-19 [3] as an example: the float version of the model needs about 500 MB of storage, which can be reduced to about 32 MB with ternary precision. Thus, although the compression rate of TWNs is 2× less than that of BPWNs, it is fair enough for compressing most of the existing state-of-the-art DNN models.
Computational requirement: Compared with BPWNs, TWNs own an extra zero state. However, the zero terms need not be accumulated in any multiply operations, so the number of multiply-accumulate operations in TWNs stays unchanged compared with the binary precision counterparts. As a result, TWNs are also hardware-friendly for training large-scale networks with specialized DL hardware.
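A back-of-the-envelope script (illustrative only; every number simply restates the claims above) that checks the template-count and compression arithmetic:

```python
# Sanity check of the three advantages above (illustrative only).
binary_templates = 2 ** (3 * 3)    # 512 distinct 3x3 binary filters
ternary_templates = 3 ** (3 * 3)   # 19683 distinct 3x3 ternary filters
print(ternary_templates, ternary_templates / binary_templates)  # 19683, ~38.4x

bits_float32, bits_ternary = 32, 2
print(bits_float32 // bits_ternary)  # 16x model compression
print(500 / 16)                      # VGG-19: ~500 MB float32 -> ~31 MB ternary
```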
In the following parts, we give detailed descriptions of the ternary weight networks problem and an approximated but efficient solution. After that, a simple training algorithm with error back-propagation is introduced, and the runtime usage is described at last.
# 2.2. Problem Formulation

To make the ternary weight networks perform well, we seek to minimize the Euclidean distance between the full precision weights $W$ and the ternary-valued weights $\hat{W}$ along with a nonnegative scaling factor $\alpha$ [7]. The optimization problem is formulated as follows,
$$
\alpha^*, \hat{W}^* = \mathop{\arg\min}_{\alpha,\, \hat{W}} \; J(\alpha, \hat{W}) = \left\lVert W - \alpha \hat{W} \right\rVert_2^2
\quad \text{s.t.}\quad \alpha \ge 0,\;\; \hat{W}_i \in \{-1, 0, +1\},\; i = 1, 2, \dots, n
\tag{1}
$$
Here $n$ is the size of the filter. With the approximation $W \approx \alpha \hat{W}$, a basic block of forward propagation in ternary weight networks is as follows,
$$
Z = X * W \approx X * (\alpha \hat{W}) = (\alpha X) \oplus \hat{W},
\qquad X^{\mathrm{next}} = g(Z)
\tag{2}
$$
where $X$ is the input of the block; $*$ is a convolution or inner product operation; $g$ is a nonlinear activation function; $\oplus$ indicates a convolution or an inner product operation without any multiplication; $Z$ is the output feature map of the neural network block, which can also be used as the input of the next block.
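A minimal PyTorch sketch of this forward pass (our illustration, not the released code; `w_hat` and `alpha` are assumed to come from the threshold-based ternarization of Sec. 2.3 below):

```python
import torch
import torch.nn.functional as F

def ternary_conv2d(x, w_hat, alpha, stride=1, padding=0):
    """Eq. 2: Z = X * (alpha W_hat) = (alpha X) (+) W_hat.

    w_hat: ternary filter bank with values in {-1, 0, +1}
    alpha: one nonnegative scaling factor per output filter
    """
    # The convolution itself only touches {-1, 0, +1} weights, so on
    # specialized hardware it reduces to additions and subtractions.
    z = F.conv2d(x, w_hat, stride=stride, padding=padding)
    # Folding alpha into the input (equivalently, for per-filter scales,
    # into the output channels) keeps the heavy operation multiplication-free.
    return z * alpha.view(1, -1, 1, 1)
```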
# 2.3. Threshold-based Ternary Function

One way to solve the optimization problem in Eq. 1 is to expand the cost function $J(\alpha, \hat{W})$ and take the derivatives w.r.t. $\alpha$ and $\hat{W}_i$ respectively. However, this would yield interdependent $\alpha^*$ and $\hat{W}_i^*$, so there is no deterministic solution in this way [19]. To overcome this, we try to find an approximated optimal solution with a threshold-based ternary function,
$$
\hat{W}_i = f(W_i \mid \Delta) =
\begin{cases}
+1, & \text{if } W_i > \Delta \\
0, & \text{if } |W_i| \le \Delta \\
-1, & \text{if } W_i < -\Delta
\end{cases}
\tag{3}
$$
Here $\Delta$ is a positive threshold parameter. With Eq. 3, the original problem can be transformed to
$$
\alpha^*, \Delta^* = \mathop{\arg\min}_{\alpha \ge 0,\, \Delta > 0}
\left( |I_\Delta|\, \alpha^2 \;-\; 2 \Big( \sum_{i \in I_\Delta} |W_i| \Big) \alpha \;+\; c_\Delta \right)
\tag{4}
$$
where $I_\Delta = \{\, i \mid |W_i| > \Delta \,\}$ and $|I_\Delta|$ denotes the number of elements in $I_\Delta$; $c_\Delta = \sum_{i \in I_\Delta^c} W_i^2$ is an $\alpha$-independent constant. Thus, for any given $\Delta$, the optimal $\alpha$ can be computed as follows,
$$
\alpha^*_\Delta = \frac{1}{|I_\Delta|} \sum_{i \in I_\Delta} |W_i|
\tag{5}
$$
By substituting $\alpha^*_\Delta$ into Eq. 4, we get a $\Delta$-dependent equation, which can be simplified as follows,
$$
\Delta^* = \mathop{\arg\max}_{\Delta > 0} \; \frac{1}{|I_\Delta|} \Big( \sum_{i \in I_\Delta} |W_i| \Big)^2
\tag{6}
$$
The above equation has no straightforward solution. Though discrete optimization could be used to solve the problem (since the states of $W_i$ are finite), it would be very time consuming. As a viable alternative, we make the single assumption that the $W_i$ are generated from a uniform or a normal distribution. In case the $W_i$ are uniformly distributed in $[-\alpha, \alpha]$ and $\Delta$ lies in $(0, \alpha]$, the approximated $\Delta^*$ is $\alpha/3$, which equals $\frac{2}{3} E(|W|)$. When the $W_i$ are generated from a normal distribution $N(0, \sigma^2)$, the approximated $\Delta^*$ is $0.6\sigma$, which equals $0.75\, E(|W|)$. Thus, we can use the rule of thumb $\Delta^* \approx 0.75\, E(|W|) \approx \frac{0.75}{n} \sum_{i=1}^{n} |W_i|$ for fast and easy computation.
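Putting Eq. 3, Eq. 5 and the rule of thumb together, the per-filter ternarization can be sketched as follows (our PyTorch illustration, not the authors' released code):

```python
import torch

def ternarize(w: torch.Tensor):
    """Approximate solution of Eq. 1 for a single filter w (any shape)."""
    delta = 0.75 * w.abs().mean()                 # Delta* ~ (0.75/n) sum_i |W_i|
    mask = w.abs() > delta                        # I_Delta = {i : |W_i| > Delta}
    w_hat = torch.zeros_like(w)
    w_hat[mask] = torch.sign(w[mask])             # Eq. 3: map to {-1, 0, +1}
    alpha = w[mask].abs().mean() if mask.any() else w.new_tensor(0.)  # Eq. 5
    return w_hat, alpha
```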
# 2.4. Training of Ternary-Weight-Networks

CNNs typically include convolution layers, fully-connected layers, pooling layers (e.g., max-pooling, average-pooling), Batch Normalization (BN) layers [20] and activation layers (e.g., ReLU, sigmoid). In TWNs, we also follow the traditional neural network block design philosophy; the order of layers in a typical ternary block of TWNs is shown in Fig. 1.
We borrow the parameter optimization strategy successfully applied in BinaryConnect [10] and XNOR-Net [7]: in our design, ternarization only happens in the forward and backward passes of the convolution and fully-connected layers, while in the parameter update stage we still keep a copy of the full-precision parameters.
Algorithm 1: Training an M-layer CNN with ternary weights.
Inputs: a minibatch of inputs and targets (I, Y), loss function L(Y, Ŷ), current weights W^t and current learning rate η^t.
Outputs: updated weights W^{t+1} and updated learning rate η^{t+1}.
1. Ternarize the float32 weight filters: for m = 1 to M and for the k-th filter in the m-th layer,
   Δ_mk = (0.75/n) ‖W_mk‖_1;  Ŵ_mk ∈ {-1, 0, +1} via Eq. 3;  α_mk = (Ŵ_mk^⊤ W_mk) / (Ŵ_mk^⊤ Ŵ_mk);  T_mk = α_mk Ŵ_mk
2. Ŷ = TernaryForward(I, Ŵ, α)  // standard forward propagation
3. ∂L/∂Ŵ = TernaryBackward(∂L/∂Ŷ, T)  // standard backward propagation, except that gradients are computed using T instead of W^t
4. W^{t+1} = UpdateParameters(W^t, ∂L/∂T, η^t)  // we use SGD in this paper
5. η^{t+1} = UpdateLearningRate(η^t, t)  // we use learning rate step decay in this paper
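A condensed sketch of one iteration in the spirit of Algorithm 1 (ours, assuming PyTorch and reusing the hypothetical `ternarize` helper sketched in Sec. 2.3; the released code at the GitHub link above is the authoritative version):

```python
import torch

def ternary_train_step(model, x, y, loss_fn, optimizer):
    cached = []
    for m in model.modules():
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
            cached.append((m, m.weight.data.clone()))   # keep float32 master copy
            w_hat, alpha = ternarize(m.weight.data)     # Eqs. 3 and 5
            m.weight.data = alpha * w_hat               # forward/backward see T = alpha * W_hat
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()                                     # gradients computed w.r.t. T, not W^t
    for m, w_full in cached:
        m.weight.data.copy_(w_full)                     # restore the float32 weights...
    optimizer.step()                                    # ...and apply the SGD update to them
    return loss.item()
```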
Fig. 1. A typical ternary block in TWNs. In the forward pass, we apply the ternarization operation to the weights of the convolution layer, while the float32 weights are cached for the future parameter update; in the backward pass, we calculate the ternary weight gradients to update the float32 weights.
In addition, two effective tricks, Batch Normalization and learning rate step decay (dropping the learning rate by a factor every few epochs), are adopted. We use stochastic gradient descent (SGD) with momentum to update the parameters when training TWNs; the detailed training settings are shown in Table 1.
# 2.5. Inference of Ternary-Weight-Networks

In the forward pass, the scaling factor $\alpha$ can be transformed to the inputs according to Eq. 2. Thus, we only need to keep the ternary-valued weights and the scaling factors for deployment. This results in up to a 16× model compression rate for deployment compared with the float32 precision counterparts.
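To make the 2-bit storage claim concrete, here is a deployment-side sketch (our illustration; the actual storage format used by the released code may differ) that packs four ternary weights per byte, leaving one float32 scaling factor per filter:

```python
import numpy as np

def pack_ternary(w_hat: np.ndarray) -> np.ndarray:
    """Pack values in {-1, 0, +1} into 2-bit codes, four weights per uint8."""
    codes = (w_hat.astype(np.int8) + 1).astype(np.uint8).ravel()  # {-1,0,+1} -> {0,1,2}
    codes = np.pad(codes, (0, (-len(codes)) % 4))                 # pad to a multiple of 4
    c = codes.reshape(-1, 4)
    return c[:, 0] | (c[:, 1] << 2) | (c[:, 2] << 4) | (c[:, 3] << 6)
```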
# 3. EXPERIMENTS AND DISCUSSION
We benchmark ternary weight networks (TWNs) against binary precision weight networks (BPWNs) and full precision weight networks (FPWNs) on both classification tasks (MNIST, CIFAR-10 and ImageNet) and an object detection task (PASCAL VOC).
Table 1. Backbones and hyperparameter settings used by our method on the three classification benchmarks.

| | MNIST | CIFAR-10 | ImageNet |
|---|---|---|---|
| backbone architecture | LeNet-5 | VGG-7 | ResNet18B |
| weight decay | 1e-4 | 1e-4 | 1e-4 |
| mini-batch size | 50 | 100 | 64 (×4)¹ |
| initial learning rate | 0.01 | 0.1 | 0.1 |
| learning rate adjust steps² | 15, 25 | 80, 120 | 30, 40, 50 |
| momentum | 0.9 | 0.9 | 0.9 |

¹ We use 4 GPUs to speed up the training. ² The learning rate is divided by 10 at these epochs.
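For instance, the CIFAR-10 column of Table 1 maps onto the following PyTorch setup (a sketch of the listed hyperparameters with a stand-in model; not the authors' training script):

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the VGG-7 backbone of Table 1
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# "learning rate adjust steps": divide the rate by 10 at epochs 80 and 120
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[80, 120], gamma=0.1)
```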
For a fair comparison, we keep the following configurations the same across methods: network architecture, regularization method (L2 weight decay), learning rate scaling procedure (multi-step) and optimization method (SGD with momentum). BPWNs use the sign function to binarize the weights, and FPWNs use float-valued weights. See Table 1 for the training configurations.
# 3.1. Experiments of Classification

MNIST is a collection of handwritten digits and a very popular dataset in the field of image processing. The LeNet-5 [21] architecture we use in the MNIST experiment is "32-C5 + MP2 + 64-C5 + MP2 + 512-FC + SVM", which starts with a 5×5 convolutional block that includes a convolution layer, a BN layer and a ReLU layer. A max-pooling layer with stride 2 follows. The "FC" is a fully-connected block with 512 nodes. The top layer is an SVM classifier with 10 labels. Finally, hinge loss is minimized with SGD.
CIFAR-10 consists of 10 classes with 6K color images of 32×32 resolution per class, divided into 50K training and 10K test images. We define a VGG-inspired architecture, denoted VGG-7, by "2×(128-C3) + MP2 + 2×(256-C3) + MP2 + 2×(512-C3) + MP2 + 1024-FC + Softmax". Compared with the architecture in [10], we omit the last fully-connected layer. We follow the data augmentation in [1, 22] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. At test time, we only evaluate the single view of the original 32×32 image.
ImageNet consists of about 1.2 million training images from 1000 categories and 50,000 validation images. ImageNet has higher resolution and greater diversity, and is closer to real life than MNIST and CIFAR-10. We adopt the popular ResNet18 architecture [1] as the backbone. Besides, we also benchmark an enlarged counterpart whose number of filters in each block is 1.5× that of the original, termed ResNet18B. In each training iteration, images are randomly cropped to 224×224. We do not use any resize tricks [7] or any color augmentation.
Table 2 shows the classification results.
- with weights constrained to +1, 0 and -1. The Euclidian distance between full
(float or double) precision weights and the ternary weights along with a
scaling factor is minimized in training stage. Besides, a threshold-based
ternary function is optimized to get an approximated solution which can be fast
and easily computed. TWNs have shown better expressive abilities than binary
precision counterparts. Meanwhile, TWNs achieve up to 16$\times$ model
compression rate and need fewer multiplications compared with the float32
precision counterparts. Extensive experiments on MNIST, CIFAR-10, and ImageNet
datasets show that the TWNs achieve much better result than the
Binary-Weight-Networks (BWNs) and the classification performance on MNIST and
CIFAR-10 is very close to the full precision networks. We also verify our
method on object detection task and show that TWNs significantly outperforms
BWN by more than 10\% mAP on PASCAL VOC dataset. The pytorch version of source
code is available at: https://github.com/Thinklab-SJTU/twns. | http://arxiv.org/pdf/1605.04711 | Fengfu Li, Bin Liu, Xiaoxing Wang, Bo Zhang, Junchi Yan | cs.CV | 5 pages, 3 fitures, conference | null | cs.CV | 20160516 | 20221120 | [
{
"id": "1602.07360"
},
{
"id": "1510.03009"
},
{
"id": "1512.02325"
},
{
"id": "1603.05279"
},
{
"id": "1512.03385"
}
] |
1605.04711 | 18 | 0.995 0.99 0.9 0.985 0.98 + -- 0.975 0.97 âAccuracy 0.965 + 'ââ Full precision (ResNet-18) Full precision (ResNet-18B) 1-2» Temary precision (ResNet-18) 0.96 + ~ _ |e Ternary precision (LeNet-5) 0.75 Binary precision (LeNet-5) 0.955 | |p-e-* Ternary precision (ResNet-18B) 1: }»â-#â«Binary precision (ResNetâ18) Binary precision (ResNet-18B) Full precision (VGG7~128) Temary precision (VGG7-128) Binary precision (VGG7-128) 0.95 +# + 15 20 0 2 Epochs (a) MNIST 2 30 35 «40 40 60 80 100 120 140 160 180 Epochs (b) CIFAR-10 02 + 0 5 10 15 20 25 30 35 40 45 50 55 60 Epochs (c) ImageNet (top-5)
(b) CIFAR-10 Fig. 2. Classiï¬cation accuracy over training epochs MNIST (top-1 accuracy), CIFAR10 (top-1) and ImageNet (top-5).
# (a) MNIST | 1605.04711#18 | Ternary Weight Networks | We present a memory and computation efficient ternary weight networks (TWNs)
Table 2. Classification accuracy (%) on MNIST, CIFAR-10, and ImageNet. ImageNet results use ResNet18 as the backbone (ResNet18B in brackets).

| Method | MNIST | CIFAR-10 | ImageNet (top-1) | ImageNet (top-5) |
|---|---|---|---|---|
| TWNs (our main approach) | 99.35 | 92.56 | 61.80 (65.3) | 84.20 (86.2) |
| BPWNs (binary precision counterpart) | 99.05 | 90.18 | 57.50 (61.6) | 81.20 (83.9) |
| FPWNs (full precision counterpart) | 99.41 | 92.88 | 65.4 (67.6) | 86.76 (88.0) |
| BinaryConnect [10] | 98.82 | 91.73 | - | - |
| Binarized Neural Networks [11] | 98.60 | 89.85 | - | - |
| Binary Weight Networks [7] | - | - | 60.8 | 83.0 |
| XNOR-Net [7] | - | - | 51.2 | 73.2 |
Table 3. Detection performance (%) on PASCAL VOC with YOLOv5 (small) as the detector.
On MNIST and CIFAR-10, TWNs achieve nearly the same performance as FPWNs while beating BPWNs. On the large-scale ImageNet dataset, both BPWNs and TWNs perform worse than FPWNs; however, the accuracy gap between TWNs and FPWNs is smaller than the gap between BPWNs and TWNs. In addition, when the backbone is changed from ResNet18 to the larger ResNet18B, the performance gap between TWNs (or BPWNs) and FPWNs shrinks, which indicates that low-precision networks benefit more from larger models than their full-precision counterparts. The validation accuracy curves of the different approaches across all training epochs on MNIST, CIFAR-10, and ImageNet are shown in Fig. 2. BPWNs clearly converge more slowly and less stably than TWNs and FPWNs, whereas TWNs converge almost as fast and as stably as FPWNs.
3.2. Experiments on Detection

PASCAL VOC [23] consists of 20 classes, with 11,540 images and 27,450 labeled objects. We adopt the popular YOLOv5 (small) [24] architecture and compare the performance of full-precision, binary-precision, and ternary-precision weights in Table 3.
Specifically, we initialize each model with the weights pre-trained on the MS-COCO dataset [25] (provided by YOLOv5) and fine-tune each model for 150 epochs. We observe that TWNs significantly outperform BPWNs by more than 10% mAP, showing the great effectiveness of our method.
4. CONCLUSION
In this paper, we have introduced simple, efficient, and accurate ternary weight networks for real-world AI applications, which reduce memory usage by about 16x and computation by about 2x. We formulate the optimization problem of TWNs and give an approximate solution based on a simple but effective ternary function. The proposed TWNs achieve a balance between accuracy and the model compression rate, as well as the potentially low computational requirements, of BPWNs. Empirical results on public benchmarks show the superior performance of the proposed method.
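To make the 16x figure concrete, here is a small sketch (not part of the paper's released code) that packs ternary weights at 2 bits each; the 00/01/10 encoding is an arbitrary assumption chosen for illustration. Relative to 32-bit floats, 2 bits per weight gives the stated 32/2 = 16x reduction:

```python
import numpy as np

def pack_ternary(w_t):
    """Pack {-1, 0, +1} weights into 2 bits each (assumed codes: 00=0, 01=+1, 10=-1)."""
    codes = np.zeros(w_t.size, dtype=np.uint8)
    flat = w_t.ravel()
    codes[flat == 1] = 0b01
    codes[flat == -1] = 0b10
    packed = np.zeros((w_t.size + 3) // 4, dtype=np.uint8)
    for i, c in enumerate(codes):
        packed[i // 4] |= c << (2 * (i % 4))  # four 2-bit codes per byte
    return packed

w_t = np.random.choice([-1.0, 0.0, 1.0], size=1000)
packed = pack_ternary(w_t)
print(w_t.size * 4, "bytes as float32 ->", packed.size, "bytes packed")  # 4000 -> 250
```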
5. REFERENCES
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep residual learning for image recognition," arXiv preprint arXiv:1512.03385, 2015.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[3] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[4] W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," CVPR, pp. 1-9, 2015.
[5] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed, "SSD: Single shot multibox detector," arXiv preprint arXiv:1512.02325, 2015.
[6] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, pp. 91-99, 2015.
[7] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," arXiv preprint arXiv:1603.05279, 2016.
[8] Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, et al., "Convolutional networks for fast, energy-efficient neuromorphic computing," Proceedings of the National Academy of Sciences, vol. 113, no. 41, pp. 11441-11446, 2016.
[9] Song Han, Huizi Mao, and William J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," arXiv preprint arXiv:1510.00149, 2015.
[10] M. Courbariaux, Y. Bengio, and J.-P. David, "BinaryConnect: Training deep neural networks with binary weights during propagations," NeurIPS, pp. 3123-3131, 2015.
[11] I. Hubara, D. Soudry, and R. E. Yaniv, "Binarized neural networks," Advances in Neural Information Processing Systems, 2016.
[12] Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio, "Neural networks with few multiplications," arXiv preprint arXiv:1510.03009, 2015.
[13] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size," arXiv preprint arXiv:1602.07360, 2016.
[14] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," CoRR, vol. abs/1704.04861, 2017.
[15] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices," in CVPR, 2018.
[16] Hanxiao Liu, Karen Simonyan, and Yiming Yang, "DARTS: Differentiable architecture search," in ICLR, 2019.
[17] Xiaoxing Wang, Chao Xue, Junchi Yan, Xiaokang Yang, Yonggang Hu, and Kewei Sun, "MergeNAS: Merge operations into one for differentiable architecture search," in IJCAI, 2020, pp. 3065-3072.
[18] Xiaoxing Wang, Jiale Lin, Juanping Zhao, Xiaokang Yang, and Junchi Yan, "EAutoDet: Efficient architecture search for object detection," in ECCV, 2022.
[19] K. Hwang and W. Sung, "Fixed-point feedforward deep neural network design using weights +1, 0, and -1," IEEE Workshop on Signal Processing Systems (SiPS), pp. 1-6, 2014.
[20] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," Proceedings of the 32nd International Conference on Machine Learning, pp. 448-456, 2015.
[21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[22] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, "Deeply-supervised nets," Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pp. 562-570, 2015.
[23] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and Andrew Zisserman, "The PASCAL visual object classes (VOC) challenge," Int. J. Comput. Vis., vol. 88, no. 2, pp. 303-338, 2010.
[24] Glenn Jocher, "YOLOv5 documentation," https://docs.ultralytics.com/, May 2020.
[25] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick, "Microsoft COCO: Common objects in context," in ECCV, 2014.
Theano: A Python framework for fast computation of mathematical expressions
The Theano Development Team (cs.SC, cs.LG, cs.MS; 19 pages, 5 figures)
Source: http://arxiv.org/pdf/1605.02688

Rami Al-Rfou,6 Guillaume Alain,1 Amjad Almahairi,1 Christof Angermueller,7,8 Dzmitry Bahdanau,1 Nicolas Ballas,1 Frédéric Bastien,1 Justin Bayer, Anatoly Belikov,9 Alexander Belopolsky,10 Yoshua Bengio,1,3 Arnaud Bergeron,1 James Bergstra,1 Valentin Bisson,1 Josh Bleecher Snyder, Nicolas Bouchard,1 Nicolas Boulanger-Lewandowski,1 Xavier Bouthillier,1 Alexandre de Brébisson,1 Olivier Breuleux,1 Pierre-Luc Carrier,1 Kyunghyun Cho,1,11 Jan Chorowski,1,12 Paul Christiano,13 Tim Cooijmans,1,14 Marc-Alexandre Côté,15 Myriam Côté,1 Aaron Courville,1,4 Yann N. Dauphin,1,16 Olivier Delalleau,1 Julien Demouth,17 Guillaume Desjardins,1,18
Sander Dieleman,19 Laurent Dinh,1 Mélanie Ducoffe,1,20 Vincent Dumoulin,1 Samira Ebrahimi Kahou,1,2 Dumitru Erhan,1,21 Ziye Fan,22 Orhan Firat,1,23 Mathieu Germain,1 Xavier Glorot,1,18 Ian Goodfellow,1,24 Matt Graham,25 Caglar Gulcehre,1 Philippe Hamel,1 Iban Harlouchet,1 Jean-Philippe Heng,1,26 Balázs Hidasi,27 Sina Honari,1 Arjun Jain,28 Sébastien Jean,1,11 Kai Jia,29 Mikhail Korobov,30 Vivek Kulkarni,6 Alex Lamb,1 Pascal Lamblin,1 Eric Larsen,1,31 César Laurent,1 Sean Lee,17 Simon Lefrancois,1 Simon Lemieux,1 Nicholas Léonard,1 Zhouhan Lin,1 Jesse A. Livezey,32
Cory Lorenz,33 Jeremiah Lowin, Qianli Ma,34 Pierre-Antoine Manzagol,1 Olivier Mastropietro,1 Robert T. McGibbon,35 Roland Memisevic,1,4 Bart van Merriënboer,1 Vincent Michalski,1 Mehdi Mirza,1 Alberto Orlandi, Christopher Pal,1,2 Razvan Pascanu,1,18 Mohammad Pezeshki,1 Colin Raffel,36 Daniel Renshaw,25 Matthew Rocklin, Adriana Romero,1 Markus Roth, Peter Sadowski,37 John Salvatier,38 François Savard,1 Jan Schlüter,39 John Schulman,24 Gabriel Schwartz,40 Iulian Vlad Serban,1 Dmitriy Serdyuk,1 Samira Shabanian,1 Étienne Simon,1,41 Sigurd Spieckermann, S. Ramana Subramanyam,42 Jakub Sygnowski,43 Jérémie Tanguay,1 Gijs van Tulder,44
Joseph Turian,1 Sebastian Urban,45 Pascal Vincent,1,5 Francesco Visin,1,46 Harm de Vries,1 David Warde-Farley,1 Dustin J. Webb,1,47 Matthew Willson,48 Kelvin Xu,1 Lijun Xue,49 Li Yao,1 Saizheng Zhang,1 and Ying Zhang1

1Montreal Institute for Learning Algorithms (MILA), Université de Montréal, QC, Canada 2École Polytechnique de Montréal, QC, Canada 3CIFAR Senior Fellow 4CIFAR Fellow 5CIFAR Associate Fellow 6Stony Brook University, NY, USA 7University of Cambridge, UK 8European Bioinformatics Institute, European Molecular Biology Laboratory, Cambridge, UK 9Bauman Moscow State Technical University, Russia 10Enlightenment Research LLC, New York, NY, USA 11New York University, New York, NY, USA 12University of Wroclaw, Poland 13University of California, Berkeley, CA, USA 14Maastricht University, Netherlands
15Université de Sherbrooke, QC, Canada 16Facebook AI Research 17NVIDIA Corporation 18Google DeepMind 19Ghent University, Belgium 20Équipe MIND, Sparks, laboratoire I3S, Université de Nice, France 21Google 22Speech and Hearing Research Center, Peking University, Beijing, China 23Middle East Technical University, Ankara, Turkey 24OpenAI 25University of Edinburgh, UK 26Meiji University, Tokyo, Japan 27Gravity R&D 28Indian Institute of Technology, Bombay, India 29Megvii Technology Inc. 30ScrapingHub Inc. 31CIRRELT and Département d'informatique et recherche opérationnelle, Université de Montréal, QC, Canada 32Redwood Center for Theoretical Neuroscience, Department of Physics, University of California, Berkeley, CA, USA 33PlanGrid, San Francisco, CA, USA 34Northeastern University, Boston, MA, USA 35Department of Chemistry, Stanford University, CA, USA 36Columbia University, New York, NY, USA 37University of California, Irvine, CA, USA
38AI Impacts 39Austrian Research Institute for Artificial Intelligence, Vienna, Austria 40Department of Computer Science, Drexel University, PA, USA 41École Normale Supérieure de Cachan, France 42Birla Institute of Technology and Science, Pilani, India 43University of Warsaw, Poland 44Biomedical Imaging Group, Erasmus MC, Rotterdam, Netherlands 45Institut für Informatik VI, Technical University of Munich, Garching, Germany 46Politecnico di Milano, Milan, Italy 47School of Computing, University of Utah, Salt Lake City, UT, USA 48Swiftkey 49Carnegie Mellon University West, Moffett Field, CA, USA
Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction in [1] it has been one of the most used CPU and GPU mathematical compilers, especially in the machine learning community [2], and has shown steady performance improvements [3]. Theano has been actively and continuously developed since 2008; multiple frameworks have been built on top of it, and it has been used to produce many state-of-the-art machine learning models.
The present article is structured as follows. Section I provides an overview of the Theano software and its community. Section II presents the principal features of Theano and how to use them, and compares them with other similar projects. Section III focuses on recently introduced functionalities and improvements. Section IV compares the performance of Theano against Torch7 [4] and TensorFlow [5] on several machine learning models. Section V discusses current limitations of Theano and potential ways of improving it.
â [email protected]; http://deeplearning.net/software/theano; code available at https://github.com/Theano
2
# I. OVERVIEW
# A. Vision | 1605.02688#7 | Theano: A Python framework for fast computation of mathematical expressions | Theano is a Python library that allows to define, optimize, and evaluate
Theano allows a user to symbolically define mathematical expressions and have them compiled in a highly optimized fashion, either on CPUs or on GPUs (the latter using CUDA)1, just by modifying a configuration flag. Furthermore, Theano can automatically compute symbolic differentiation of complex expressions, discard the variables that are not required to compute the final output, reuse partial results to avoid redundant computations, apply mathematical simplifications, compute operations in place when possible to minimize memory usage, and apply numerical stability optimizations to overcome or minimize errors due to hardware approximations. To achieve this, the mathematical expressions defined by the user are stored as a graph of variables and operations, which is pruned and optimized at compilation time.
The interface to Theano is Python, a powerful and flexible language that allows for rapid prototyping and provides a fast and easy way to interact with the data. The downside of Python is its interpreter, which is in many cases a poor engine for executing mathematical calculations, both in terms of memory usage and speed. Theano overcomes this limitation by exploiting the compactness and ductility of the Python language and combining them with a fast and optimized computation engine.
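As a minimal sketch of this workflow, the snippet below builds a small symbolic expression, asks Theano for its gradient, and compiles both into a callable function; the particular expression and variable names are illustrative only, not taken from the paper.

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic variables: no data is attached yet, only a type and rank.
x = T.dmatrix('x')
y = T.dmatrix('y')

# Build an expression graph; nothing is evaluated at this point.
z = T.exp(x) + y ** 2
gz = T.grad(z.sum(), x)          # symbolic differentiation w.r.t. x

# Compilation prunes and optimizes the graph, and targets CPU or GPU
# depending on the Theano configuration flags.
f = theano.function([x, y], [z, gz])

a = np.random.rand(2, 2)
b = np.random.rand(2, 2)
z_val, gz_val = f(a, b)          # fast, compiled evaluation
```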
Theano's API mimics NumPy [6, 7], a widely adopted Python library that provides an n-dimensional array data type and many functions for indexing, reshaping, and performing elementary computations (exp, log, sin, etc.) on entire arrays at once. This allows Python users to switch rapidly to Theano using a familiar syntax and set of instructions, extended with advanced features such as automatic gradient computation, numerical stability improvements, and optimization, and to generate high-performance code for CPU as well as for GPU without requiring changes to the user code. Theano has also been designed for easy and fast extensibility through the definition of custom graph expressions written in Python, C++, or CUDA.
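As a small illustration of the NumPy-like surface described here (the variable names and the expression are illustrative assumptions), the same elementwise computation can be written nearly identically against NumPy arrays and Theano symbolic tensors:

```python
import numpy as np
import theano
import theano.tensor as T

a = np.array([[1.0, 2.0], [3.0, 4.0]])

# NumPy: eager evaluation on a concrete array.
out_np = np.log(a) + np.sin(a)

# Theano: the same syntax builds a symbolic graph, compiled once.
x = T.dmatrix('x')
f = theano.function([x], T.log(x) + T.sin(x))
out_th = f(a)

print(np.allclose(out_np, out_th))  # True
```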
# B. Community
Theano is free, open-source software, licensed under the New (3-clause) BSD license. It relies on a wide and very active community of developers and users worldwide.

The main communication channels with the developers are the project's GitHub page2 for bug reports, feature requests, and pull requests, and the theano-dev mailing list,3 which has 675 subscribers. Support for users is provided by the community at theano-users4 (more than 3000 members) and on StackOverflow5 (more than 1000 questions asked). PyPI6 counted 38k downloads of Theano packages during the last month.
Since the project development migrated to GitHub in 2011, Theano has been forked 1280 times. Around 250 developers have actively contributed to the code base, and numerous others have played a role in the community, asking, answering, or curating questions, helping to discuss development needs, and writing documentation, tutorials,7 or even full-fledged software projects based on Theano.
# C. Software based on Theano
Several software packages have been developed to build on the strengths of Theano, offering higher-level user interfaces better suited to particular goals. For instance, machine learning and deep learning packages such as Pylearn2 [8], Blocks [9], Lasagne [10], and Keras [11] make it easier to express the architecture of deep learning models, and their training algorithms, as mathematical expressions to be evaluated by Theano.
Another example is PyMC3 [12], a probabilistic programming framework that uses Theano to derive expressions for gradients automatically, and to generate C code for fast execution.

1 Some OpenCL support is available in the new GPU back-end, but it is still limited and experimental.
2 https://github.com/Theano/Theano/
3 https://groups.google.com/group/theano-dev/
4 https://groups.google.com/group/theano-users/
5 http://stackoverflow.com/questions/tagged/theano
6 https://pypi.python.org/pypi
7 For instance, the deep learning tutorials at http://deeplearning.net/tutorial/
# II. MAIN FEATURES
Theano defines a language to represent mathematical expressions and manipulate them (Section II A), a compiler to create functions that can compute values for these expressions (Section II B), and a library which will execute these functions when
evaluated on numeric values (Section II C). We also explain how Theano can be extended (Section II D). Finally, we provide some comparison points with related software (Section II E).
# A. Mathematical expressions
# 1. Graph structure
Theano represents symbolic mathematical expressions as directed, acyclic graphs. These graphs are also bipartite, containing two kinds of nodes:
• Variable nodes (or variables), which represent data, usually tensors;
• Apply nodes, which represent the application of mathematical operations.
In practice, variables are used for graph inputs and outputs, as well as for intermediate values. During the execution phase, values will be provided for input variables, and computed for intermediate and output ones. An Apply node has inputs and outputs, which are Variable nodes; it represents the application of a mathematical operation (or Op) on its input variables. A Variable node can be the input to several Apply nodes, but can be the output of at most one (graph inputs are not the result of any computation). This corresponds to the single static assignment (SSA) form in compiler design, in that a variable is the result of only one assignment.
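As a minimal sketch of this structure (the variable names are illustrative), one can build a small expression and inspect the Apply node that produced it:

```python
import theano.tensor as T

a = T.dscalar('a')      # a free input Variable: it has no owner
b = a + 2 * a           # calling operators on variables adds Apply nodes

node = b.owner          # the (single) Apply node whose output is b
print(node.op)          # the Op that was applied
print(node.inputs)      # the input Variables of that Apply node
print(a.owner)          # None: graph inputs are not computed
```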
This structure is similar to dataflow graphs [13], where Apply nodes would correspond to operation nodes (the only kind of nodes), and Variable nodes would correspond to arcs in the dataflow graph. The main difference is that a single intermediate Variable node can be an input to several Apply nodes, whereas a dataflow graph would require different arcs, one for each of the next operations.
Variables are strongly typed: they enforce some conditions on the values that can be associated with them. These types are known from the moment the graph is constructed. The main categories of types are:
• TensorType, which represents n-dimensional arrays in the main memory; the values associated with variables of that type are NumPy ndarray objects;
• CudaNdarrayType, which represents n-dimensional arrays in GPU memory, associated with CudaNdarray objects, used in the legacy GPU back-end;
• GpuArrayType, associated with GpuArray objects, its equivalent in the new GPU back-end;
• Sparse, for main-memory sparse matrices, represented by SciPy CSC or CSR matrices.
The number of dimensions and the data type (float32, int64, etc.) are part of the type, as well as what we call the broadcastable pattern, which indicates which dimensions are guaranteed to have a shape of 1. Otherwise, the shape is not part of the type, and neither is the memory layout (strides).
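For instance, a sketch of how the data type, number of dimensions, and broadcastable pattern appear on a tensor variable (names are arbitrary):

```python
import theano.tensor as T

# a float32 matrix whose second dimension is guaranteed to have shape 1
r = T.TensorType(dtype='float32', broadcastable=(False, True))('r')
print(r.dtype)           # 'float32'
print(r.ndim)            # 2
print(r.broadcastable)   # (False, True)
```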
# 2. Building a graph
A computation graph is usually constructed by creating free symbolic variables first, corresponding to the inputs of the graph. Since variables are strongly typed in Theano, the type of these variables has to be specified at creation time. By calling Python functions on variables, the user can then interact with them in a direct and natural way. This is reflected under the hood by the creation of Apply nodes and new Variable nodes that extend the graph. The tensor module exposes many of the functions provided by NumPy for tensor operations, to present a familiar interface to users. Some of these add a single Apply node and its output to the graph, returning the output Variable node, while others build more complex graphs, with Apply nodes corresponding to different Ops combined in such a way that the returned variable represents the expected result.
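A minimal sketch of building a graph through the tensor module (the variable names are arbitrary):

```python
import theano.tensor as T

x = T.dmatrix('x')         # free symbolic inputs, with their types
y = T.dvector('y')         # specified at creation time
z = T.dot(x, y) + y.sum()  # NumPy-like calls extend the graph with
                           # Apply nodes and intermediate Variables
```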
It is also possible to clone an existing graph, or a part of it. In that case, what was an intermediate variable in the original graph can become a free input, or an output, of the cloned graph. It is also possible to clone with replacements, which makes it possible to plug together different disconnected graphs, turning inputs into intermediate Variable nodes.
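A sketch of cloning with replacements (a toy expression; the names are illustrative):

```python
import theano
import theano.tensor as T

y = T.dvector('y')
z = (y ** 2).sum()

# clone the graph of z, substituting a new expression for the input y
z2 = theano.clone(z, replace={y: 2 * y})
```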
# 3. Symbolic differentiation
A useful way of deriving gradients is by applying the chain rule backwards through the graph, from a scalar cost towards the inputs (or parameters). This procedure is known as gradient back-propagation, or as the backward or reverse mode of differentiation. For instance, if we have three functions f : R^M → R, g : R^N → R^M, and C : R^N → R so that C(x) = f(g(x)), then:

∂C/∂x = ∂f/∂y · ∂g/∂x, with ∂f/∂y evaluated at y = g(x)

Instead of computing (and storing in memory) explicitly the whole M × N Jacobian matrix, ∂g/∂x, all we need is a function ∇g_x : R^M → R^N, v ↦ v · ∂g/∂x, that computes the vector-Jacobian dot product for any vector v. This can be generalized easily to functions with several inputs, which can be multi-dimensional arrays.
Most Theano Ops implement a grad method that, given symbolic variables for x and v, will return a symbolic expression of ∇g_x(v), where g is the function represented by that Op. theano.grad traverses the graph following the usual back-propagation algorithm, calling the grad method on each Apply node's Op, passing that node's input as x and the gradient coming from the subsequent operations as v. This builds a symbolic expression for the gradient of the cost with respect to variables. These gradients are symbolic variables that are part of the graph as well, so it is possible to use them as parts of other symbolic expressions (to express a learning rule, for instance), and even to traverse the graph again to obtain higher-order derivatives.
Many Theano Ops also implement an R_op method, computing a symbolic expression for the Jacobian-vector dot product, R_{g,x} : R^N → R^M, v ↦ ∂g/∂x · v. This is the R-operator introduced by [14], and corresponds to the forward mode of differentiation. theano.Rop traverses the graph from inputs to outputs, calling the R_op method on each Apply node's Op.
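A minimal sketch of both entry points on a toy expression (the names are illustrative):

```python
import theano
import theano.tensor as T

x = T.dvector('x')
cost = (x ** 2).sum()        # a scalar cost

g = theano.grad(cost, x)     # reverse mode: symbolic expression for 2x

v = T.dvector('v')
y = x ** 2
jv = theano.Rop(y, x, v)     # forward mode: (dy/dx) . v, here 2x * v
```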
# 4. Scan: Symbolic loops
Since the computation graph is acyclic, and its structure is fixed and independent from the actual data, it can be a challenge to express loops symbolically. One option, when the number of steps in the loop is fixed, is to explicitly unroll the loop, adding the computation of each iteration to the computation graph. Unfortunately, this makes it impossible to iterate over a sequence of unknown length, or to iterate a variable number of times depending on the value of the data.
To sidestep these issues, Theano implements a special Op called Scan, which abstracts the entire loop in a single Apply node in the graph. That single node contains a full computation graph, isolated from the main one, that represents the computation done during each iteration of the loop. The Scan node handles the communication between the external (outer) computation graph it belongs to and the internal (inner) graph. It is also responsible for managing the bookkeeping between the different iterations.
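As a minimal sketch, the classic element-wise power computation A ** k expressed with a symbolic loop:

```python
import theano
import theano.tensor as T

k = T.iscalar('k')
A = T.vector('A')

# A ** k via a symbolic loop; the whole loop is one Scan node in the graph
result, updates = theano.scan(
    fn=lambda prior_result, A: prior_result * A,
    outputs_info=T.ones_like(A),   # initial value of the recurring output
    non_sequences=A,
    n_steps=k)

# keep only the last iteration; Scan optimizations can then drop the rest
power = theano.function([A, k], result[-1], updates=updates)
```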
The gradient of a Scan operation is implemented as another Scan operation, which iterates over reversed sequences, computing the same gradient as if the loop had been unrolled, implementing what is known as back-propagation through time. Similarly, the R operator is also a Scan operation that goes through the loop in the same order as the original Scan.
# B. The compilation phase
The compilation phase produces a Theano function (a Python callable object) able to compute values for specified output symbolic variables, given values for input variables. The sets of input and output variables have to be provided when compiling the function, but the inputs do not have to be inputs to the full computation graph, and the outputs do not have to be ultimate outputs either. It is possible to compile a function going from some intermediate variables of the graph to other intermediate variables, as long as the set of inputs contains all the information needed to compute the set of outputs. Several Theano functions can be compiled, computing different parts of the same computation graph.
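A minimal sketch of compiling and calling a Theano function (variable names are arbitrary):

```python
import theano
import theano.tensor as T

x = T.dmatrix('x')
y = T.dmatrix('y')
z = T.exp(x) + 2 * y

# cloning, graph optimization, code generation and compilation happen here
f = theano.function([x, y], z)

out = f([[0.0]], [[1.0]])   # returns a NumPy array: [[3.]]
```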
During the compilation of a Theano function, first the relevant portion of the computation graph is cloned, then it gets rewritten by the application of graph optimizations, next some optimized C++ or CUDA code gets generated and compiled if necessary, and finally a callable object is built and returned to the user.
# 1. Graph optimizations
The computation graph structure makes it possible to replace parts of the graph. For instance, a Variable node which is the output of one particular Apply node could be replaced by the output of a different Apply node, as long as they have the same type. Optimizations specify how to perform replacements of variables by other variables representing an equivalent computation. Some of them are local, which means they only look at one Apply node and can replace its outputs; others are global, and can examine the whole computation graph and perform arbitrary substitutions. Optimizations are mostly organized into the stages described below, even if there is some overlap.
• Canonicalize: Put the graph in a canonical form, to ease the task of subsequent optimizations (for instance, x · x → x²). It performs some simplifications as well, like removing duplicate computations, removing some unnecessary computations (xy/y → x), and computing the value of expressions if all their inputs are known (constant-folding, 2 + 2 → 4).
• Stabilize: Increase numerical stability, for instance log(1 + x) → log1p(x), where log1p is a stable implementation for small x.
• Specialize: Insert faster implementations of operations. For instance, successive element-wise operations are fused together to avoid having to loop over a tensor several times.
1605.02688 | 20 | 5
⢠Specialize: Insert faster implementations of operations. For instance, successive element-wise operations are fused to- gether to avoid having to loop over a tensor several times.
• GPU: Replace the default version of Ops and variables by GPU-specific versions, using either the old or new back-end, if a GPU is requested. Transfer Ops (CPU-to-GPU or GPU-to-CPU) are inserted so that the type of inputs and outputs is preserved, and around CPU-only operations.
• Inplace: Replace the default version of Ops by a version that can work in place, as a view or destructive operation over its inputs. The array types used by Theano, like ndarray, support arbitrarily-strided arrays, so all transposition operations, as well as basic slicing, can happen in place, in constant time. Some operations, like most element-wise ones, can overwrite their input and return it, to avoid allocating memory. Since destructive operations introduce additional dependencies between Apply nodes (a value can only be overwritten by the last operation to read it), dependency cycles have to be detected and prevented.
• Scan: Optimize performance and memory use of Scan nodes. For instance, only keep the value of the last step of an output in memory if the whole sequence is not needed, merge different Scan nodes to perform computations only once, and move invariants out of the loop.
While individual optimizations or groups of optimizations can be enabled or disabled individually, some optimizers (sets of optimizations) are predefined: 'None' does not include any optimization, 'fast_compile' includes only canonicalization and transfer to the GPU, and 'fast_run' (the default) includes most optimizations except for experimental and 'unsafe' ones (which remove assertions).
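A sketch of selecting an optimizer, per function or globally; this assumes the FAST_COMPILE and FAST_RUN modes that bundle the 'fast_compile' and 'fast_run' optimizer presets with a linker:

```python
import theano
import theano.tensor as T

x = T.dscalar('x')

# per-function choice: FAST_COMPILE bundles the 'fast_compile' optimizer,
# FAST_RUN (the default) bundles 'fast_run'
f = theano.function([x], x ** 2, mode='FAST_COMPILE')

# globally, e.g. THEANO_FLAGS=optimizer=fast_compile, or:
theano.config.optimizer = 'fast_compile'
```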
# 2. Shared variables
Shared variables are symbolic variables that are associated with persistent values, shared between Theano functions. They can only be input variables (not intermediate ones), since their value is not the result of the computation of an Apply node. Shared variables are implicit inputs to all the Theano functions using them.
When compiling a Theano function, it is possible to specify update expressions for shared variables. These expressions are symbolic variables that represent the new value to assign to the shared variables at the end of each function execution. They are implicit outputs of the function, and will be computed along with the other outputs, before the value gets updated. Such update rules make it possible to update the array in place in some cases, rather than returning a different array.
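A minimal sketch of a shared variable with an update rule, here a toy gradient-descent step (the cost and step size are illustrative):

```python
import numpy as np
import theano
import theano.tensor as T

w = theano.shared(np.zeros(3), name='w')   # persistent value, implicit input
x = T.dvector('x')
cost = ((w - x) ** 2).sum()

# the update expression is an implicit output; w is overwritten at each call
step = theano.function(
    [x], cost,
    updates=[(w, w - 0.1 * theano.grad(cost, w))])
```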
1605.02688 | 22 | It is also possible to explicitly assign a new value to an existing shared variable, outside of a Theano function, as long as it is compatible with its type. Since the shape is not part of the type, it is possible for the shape of a shared variable to change. If a GPU is enabled, shared variables will be created on the GPU by default, to avoid transfers (this only works for float32 arrays in the old back-end).
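A sketch of explicit assignment outside of any Theano function:

```python
import numpy as np
import theano

w = theano.shared(np.zeros(3))
w.set_value(np.ones(5))    # allowed: the shape is not part of the type
print(w.get_value())       # [1. 1. 1. 1. 1.]
```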
# 3. C code compilation and caching
The code to compute output values given input values for each Op can be implemented either in Python or in C++ (or CUDA for GPU Ops), using the C API from Python and NumPy (and from CudaNdarray or GpuArray for GPU). After the function graph is optimized, each Op generates the C++ or CUDA code for a Python module implementing that computation (including reading and writing from the right storage map), which is then compiled and imported.
A persistent cache on disk makes it possible to avoid generating code twice for the same Op, and to avoid compiling again when different Ops generate the same code (this can happen for the same operation applied on different data types, or different numbers of dimensions, for instance).
# C. Function execution
Theano includes a runtime engine that, upon a Theano function call, determines which computations to execute, on which data, and in what order, and orchestrates their evaluation. This was originally done by forward-traversing graphs from input to output, requiring all branches to be evaluated before outputs could be returned. The default runtime now uses a virtual machine (VM) system. By running small code units (each corresponding to an Apply node for one Op) and ignoring branches not necessary for correct computations, lazy evaluation is now possible.
The runtime uses a data structure containing pointers to storage for each variable (inputs and outputs of each Apply node), ordering constraints, pointers to the functions performing the computations, and information on what has been computed and needs to be computed in the current call. If the speed of execution is more important than memory usage, it is possible to keep references to ndarrays containing intermediate results, to prevent Python's garbage collection from freeing them, and to re-use them for the next run of the function, through the configuration flag allow_gc=False. The default is to allow the garbage collector to free the storage of intermediate values.
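As an illustration, the flag can be set in a .theanorc file, through the THEANO_FLAGS environment variable, or directly on theano.config; a minimal sketch (the compiled function here is arbitrary):

```python
import theano
import theano.tensor as T

# Trade memory for speed: keep the storage of intermediate results alive
# between calls instead of letting the garbage collector reclaim it.
# Equivalent to running with THEANO_FLAGS=allow_gc=False.
theano.config.allow_gc = False

x = T.matrix('x')
f = theano.function([x], T.tanh(x).sum())
```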
The C implementation of that VM (CVM) is the default runtime. Not only does this increase performance by running the runtime loop in C; when a C implementation of an Op is available, the CVM can also execute it directly. This eliminates the overhead of a Python function call, which is especially advantageous when performing many operations on small operands.
A Python implementation is also available. It is more flexible and easier to instrument, which is useful to collect more profiling information (for instance, memory usage) and to add callbacks for debugging.
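For instance, the runtime can be chosen per compiled function through a compilation mode; a minimal sketch, using the documented linker names 'cvm' (C virtual machine) and 'vm' (Python virtual machine):

```python
import theano
import theano.tensor as T

x = T.vector('x')
y = (x ** 2).sum()

# C virtual machine (CVM): the default, fastest runtime loop.
f_cvm = theano.function([x], y, mode=theano.Mode(linker='cvm'))

# Python VM: slower, but easier to instrument for debugging and profiling.
f_vm = theano.function([x], y, mode=theano.Mode(linker='vm'))
```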
# D. Extending Theano
If the existing Theano library does not include the operations required for a particular model, the framework was designed for easy extensibility. New Ops can be written by specifying the types of their input and output variables, and providing Python code to perform the evaluation. That Python code can use bindings to external high-performance libraries, or Cython, for instance. Methods can also be added to specify expressions for gradients and the R-operator (see Section II A 3), and for shape inference. Theano's self-testing functions can be used to validate outputs and to check symbolic gradients against numeric evaluations, among others.
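As a minimal sketch of this process, the toy DoubleOp below (a hypothetical Op, in the style of the examples from Theano's documentation) computes 2x with a Python perform method and a symbolic gradient expression:

```python
import theano
import theano.tensor as T
from theano import gof

class DoubleOp(gof.Op):
    """Toy Op computing 2 * x."""
    __props__ = ()  # no parameters; used for equality and hashing

    def make_node(self, x):
        # Specify the types of the input and output variables.
        x = T.as_tensor_variable(x)
        return gof.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        # Python code performing the evaluation.
        output_storage[0][0] = inputs[0] * 2

    def grad(self, inputs, output_grads):
        # Symbolic expression for the gradient.
        return [output_grads[0] * 2]

x = T.matrix('x')
f = theano.function([x], DoubleOp()(x))
```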
As mentioned above, operators can also be implemented directly in C++ or CUDA. The raw code can be supplied as a string that the Python code uses to produce the code used by the graph compiler. For added convenience, Theano can now load code from an external C-like file with the COp class. The file is divided into sections that map to the different pieces of code that Theano requires. Keeping the Python and C code separate allows more readable code with better indentation. It also enables a clearer view of the C code itself, since you can use your favorite C editor to modify that file with syntax highlighting.
A user can then write a new optimization to automatically insert that optimized operation in the computation graph, instead of the more naïve or slow version. This is especially useful when implementing an operation on GPU.
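A sketch of such a rewrite, reusing the toy DoubleOp above and assuming the usual registration helpers (local_optimizer from theano.gof, register_specialize from theano.tensor.opt):

```python
from theano.gof import local_optimizer
from theano.tensor.opt import register_specialize

@register_specialize
@local_optimizer([DoubleOp])
def local_double_via_mul(node):
    # Substitute the naive DoubleOp with a plain elementwise multiplication,
    # which Theano can further optimize (and dispatch to the GPU).
    if isinstance(node.op, DoubleOp):
        return [node.inputs[0] * 2]
    return False
```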
# E. Related software
Although Theano is developed and mainly used for research in machine learning and deep learning, it is not a deep learning framework in itself (see Section I C for some machine learning frameworks based on Theano). However, it makes sense to compare the core features of such systems with Theano, as they all support the definition of a mathematical model in a symbolic way, and implement some automatic gradient computation.
TensorFlow [5] has a core in C++ and includes most of the features of Theano, in particular the graph-compiling approach and symbolic differentiation (on full layers as well as on elementary operations), all directly accessible from Python through the API. In addition, it has a focus on distributed, multi-node computation. Even though a graph-rewriting engine is present (and used to distribute computation across devices, for instance), it does not seem to be used for mathematical expression simplification or kernel fusion at the moment.
Torch7 [4] has a different approach: it implements efficient CPU and GPU computation kernels in C and makes them available in Lua, but does not provide gradient expressions for elementary operations. Instead, packages like 'nn' and 'cunn' feature higher-level layers that can store parameters and provide methods to compute values for forward propagation, gradient back-propagation, and parameter updates. Many packages extend Torch's features; in particular, Autograd8 provides automatic differentiation of code written in Torch, by building a graph that records the evaluation of expressions (even through loops and conditionals), and playing those records back to build an expression graph for gradients. That graph is symbolic as well, making it possible to express higher-order gradients. Moreover, an optimizer can rewrite the graph to make it more efficient to evaluate. MXNet [15] and Caffe [16], both written in C++, feature the same kind of higher-level layers as Torch. MXNet can also express the gradients through those layers as symbolic layers themselves, giving more flexibility for the dispatching of the computation to different devices, and for memory reuse.
It also allows distributed computation over multiple nodes. Caffe2,9 an experimental rewrite of Caffe, features explicit symbolic gradients in the computation graph, rather than a 'backward' method of the layers.
Neon10 and Chainer [17] are two other machine learning frameworks written in Python, with GPU kernels, that feature symbolic computation graphs and symbolic differentiation. Neon's most prominent feature is its collection of highly-optimized GPU kernels, in particular for operations used in neural networks. Chainer instead builds its computation graph dynamically at the same time as its first evaluation, making it easier to express loops and conditionals.
8 https://github.com/twitter/torch-autograd/
9 https://github.com/Yangqing/caffe2
10 http://neon.nervanasys.com/
# III. NEW FEATURES
Over the last couple of years, multiple improvements have been made in Theano, in particular for faster execution, including support for more operations on the GPU and multiple-GPU support (Section III A), faster graph optimization, especially for larger graphs (Section III B), and ease of use, with better error messages and tools for introspection, visualization, and debugging (Section III C).
# A. Increased performance
# 1. Abstract Ops and 2D convolutions
Convolution operations are at the core of Convolutional Neural Networks (CNNs), which have led to spectacular advances in machine learning problems involving visual data [18]. A more detailed description of the convolution operations can be found in [19].
The multiplication of convolution implementations available in Theano (CPU-GEMM, GPU-cuDNN, GPU-GEMM, FFT, ...) has increased the need for a flexible convolution interface that makes it easy to switch between those implementations, each having a different speed and memory trade-off, as well as different software dependencies. To suit this need, Theano 0.8 introduces abstract Ops, which disentangle the interface of an Op from its actual implementation. An abstract Op is a place-holder Apply node in the graph, corresponding to a given operation, that does not provide an actual implementation. For each optimized implementation of that operation, there is an optimization that will insert an Apply node for that optimized Op instead of the abstract Apply node during the compilation phase.
In particular, Theano proposes three abstract Ops for convolution: AbstractConv2d, AbstractConv2d_gradInputs, and AbstractConv2d_gradWeights, which correspond respectively to the forward convolution, the convolution gradient w.r.t. the inputs, and the convolution gradient w.r.t. the weights. Each abstract Op can be replaced by one of the different implementations. By default, if a GPU is enabled and cuDNN is available, Theano will use it (see Section III A 2); otherwise it will fall back to using the GEMM version. A slow, Python-only implementation is part of the abstract Ops for debugging purposes. The optimizations can be included or excluded using the configuration flags, which makes it possible to manually select a specific convolution implementation.
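For example, building a graph through the abstract interface leaves the choice of implementation to the compiler; a minimal sketch:

```python
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

images = T.tensor4('images')    # (batch, channels, rows, cols)
filters = T.tensor4('filters')  # (n_filters, channels, rows, cols)

# conv2d inserts an abstract convolution node; during compilation an
# optimization replaces it with a concrete implementation (cuDNN, GEMM, ...)
# according to the available hardware and the configuration flags.
output = conv2d(images, filters, border_mode='valid')
f = theano.function([images, filters], output)
```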
# 2. Using cuDNN
Efficient CUDA primitives for neural networks are implemented in the cuDNN library [20], in particular convolutions, pooling, and their gradients. Several implementations of convolutions (and their gradients) are provided with the same interface, with performance and memory usage that depend on the actual shapes of the data and filters. Since the best implementation can be different for different convolutions in the same model (depending on their size) and on different hardware (depending on the available memory), cuDNN also provides a heuristic to guess the best algorithm given the shapes, and can actually time the different implementations (those that are feasible given the available free memory) and select the fastest one.
Theano wraps cuDNN 2D and 3D convolutions and their gradients, and provides options to select the algorithm to use, either explicitly or using one of the following special values: 'guess_once', 'guess_on_shape_change', 'time_once', or 'time_on_shape_change'. This selection can be done individually for each Apply node in the graph, and configuration flags select the global defaults for the forward convolution, the gradient w.r.t. the data, and the gradient w.r.t. the weights. Theano also wraps pooling operations, as well as softmax and log-softmax operations. More operations will be added in the future.
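For illustration, the global defaults could be set as below; a sketch assuming the dnn.conv.* flag names of recent Theano versions (the same values can be passed through THEANO_FLAGS):

```python
import theano

# Time all feasible cuDNN algorithms once, then reuse the fastest one
# for every subsequent call with the same shapes.
theano.config.dnn.conv.algo_fwd = 'time_once'
theano.config.dnn.conv.algo_bwd_data = 'time_once'
theano.config.dnn.conv.algo_bwd_filter = 'time_once'
```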
# 3. CNMeM integration
Another improvement to GPU performance comes from integrating the CNMeM library,11 and using the allocator and deallocator it provides. The main issue was that calling cudaFree is synchronous, so it forces the synchronization of all the streams on the device, waiting for them to finish, which seriously limited the potential for parallel execution of different kernels. A previous option was to keep memory allocated for intermediate values between calls, as mentioned in Section II C, but the amount of memory typically available on GPU devices is limited.
11 The original code is available at https://github.com/NVIDIA/cnmem; Theano includes a copy of it.
CNMeM works by allocating large memory pools using cudaMalloc, returning chunks of them when its allocator is called, and keeping track of which ones are released by its deallocator. Theano makes it possible to reserve part of the GPU memory from the start, using, for instance, lib.cnmem=0.9 to reserve 90% of the memory for CNMeM. The new GPU back-end does not use CNMeM, but implements a similar strategy, with an asynchronous allocator and deallocator and a memory pool.
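As an illustration, the pool must be reserved before the GPU is initialized, so the flag is typically set in the environment or in .theanorc rather than after importing Theano; a minimal sketch:

```python
import os

# Reserve 90% of the device memory for the CNMeM pool; this avoids
# synchronous cudaFree calls (and the resulting stream synchronizations)
# during execution. Must be set before Theano initializes the device.
os.environ['THEANO_FLAGS'] = 'device=gpu,lib.cnmem=0.9'

import theano  # the memory pool is allocated at initialization time
```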
# 4. Improvements in Scan
Important speed improvements have been made to Scan, in addition to making it more stable and supporting more cases. The time to optimize and compile graphs containing Scan Apply nodes has been reduced substantially, and the execution time of the resulting function has improved as well.
The optimizations related to Scan (pushing computation out of the loop, removing useless computation) have been improved so they can be applied faster. Additional optimizations have been added, so that more computation can be moved out of the loop, for increased execution speed.
mathematical expressions involving multi-dimensional arrays efficiently. Since
its introduction, it has been one of the most used CPU and GPU mathematical
compilers - especially in the machine learning community - and has shown steady
performance improvements. Theano is being actively and continuously developed
since 2008, multiple frameworks have been built on top of it and it has been
used to produce many state-of-the-art machine learning models.
The present article is structured as follows. Section I provides an overview
of the Theano software and its community. Section II presents the principal
features of Theano and how to use them, and compares them with other similar
projects. Section III focuses on recently-introduced functionalities and
improvements. Section IV compares the performance of Theano against Torch7 and
TensorFlow on several machine learning models. Section V discusses current
limitations of Theano and potential ways of improving it. | http://arxiv.org/pdf/1605.02688 | The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang | cs.SC, cs.LG, cs.MS | 19 pages, 5 figures | null | cs.SC | 20160509 | 20160509 | [] |
The execution back-end of Scan has been made more efficient as well, by removing some of the bookkeeping overhead, and by making the inner function write directly into the right output buffer at each execution step, rather than having to copy the intermediate results each time.
The grad method of Scan has been rewritten to scale better with large numbers of input and output variables, and to generate a cleaner graph. That cleaner graph can lead to a faster optimization time, since less rewriting is needed and the inner graph is smaller, and to faster execution as well. In the case of nested symbolic loops, the observed speed-up in compilation time was sometimes huge, going from hours to minutes.
Finally, an additional keyword, strict, has been added to the scan function. It prevents shared variables from being implicitly added as non-sequence inputs to the inner function. This forces the user to explicitly provide all non-sequences needed in the inner function, which may not be the shared variables themselves, but rather the outputs of some computation done on them. In that case, doing so prevents pulling that computation inside the loop, which can speed up the optimization as well as the execution.
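A sketch of a strict loop, where a function of a shared variable is computed once, outside the loop, and passed explicitly as a non-sequence (names are illustrative; the default floatX is assumed):

```python
import numpy as np
import theano
import theano.tensor as T

W = theano.shared(np.random.randn(3, 3), name='W')
x0 = T.vector('x0')

# Computed once, outside the loop; with strict=True it must be passed
# explicitly instead of being pulled in implicitly through W.
W2 = W * 2

def step(x_prev, W2_):
    return T.tanh(T.dot(W2_, x_prev))

outputs, updates = theano.scan(
    fn=step,
    outputs_info=[x0],
    non_sequences=[W2],
    n_steps=10,
    strict=True,
)
f = theano.function([x0], outputs[-1])
```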
# 5. New gpuarray-based back-end
Theano now features a new GPU back-end based on libgpuarray [21]. This new back-end brings several improvements over the previous one. The most visible improvement is that it supports all the usual data types, instead of being limited to float32 data. In particular, it supports half-precision floating point values (float16). As did the previous back-end, this one supports views and strides to avoid copies and reuse memory whenever possible.
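For instance, a shared variable can now hold half-precision data directly; a minimal sketch, assuming the gpuarray back-end is enabled (e.g. device=cuda):

```python
import numpy as np
import theano

# float16 storage halves the memory footprint of the parameters; the
# new back-end is no longer limited to float32 shared variables.
w16 = theano.shared(np.zeros((1024, 1024), dtype='float16'), name='w16')

# Reductions over float16 data accumulate in a wider dtype.
f = theano.function([], w16.sum())
```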
libgpuarray12 is a separate project with the aim of providing an ndarray-like object on the GPU. It has a C interface so that it can be reused in other projects that don't use Python. It also supports 64-bit indexing, so that arrays with more than 2^32 elements are supported.
Another noticeable improvement is basic support for OpenCL; however, a sizable portion of the GPU Ops in Theano do not currently support it. This could be fixed with some porting effort.
The new back-end also allows using multiple GPUs in the same function, to do model parallelism. One example of such a model is the two-stack variant of AlexNet [18]. This may, however, be hampered by the Python Global Interpreter Lock (GIL) in some cases, meaning that one will get correct results, but may lose parallelism.
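A sketch of the documented mapping mechanism for multiple devices: named contexts are bound to GPUs through a configuration flag, and shared variables are pinned to a context with the target argument (context and variable names here are illustrative):

```python
import os

# Bind two context names to two devices; must be set before importing theano.
os.environ['THEANO_FLAGS'] = 'contexts=dev0->cuda0;dev1->cuda1'

import numpy as np
import theano

# Pin each half of a model to a different GPU (model parallelism).
w_a = theano.shared(np.random.randn(256, 256).astype('float32'), target='dev0')
w_b = theano.shared(np.random.randn(256, 256).astype('float32'), target='dev1')
```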
Several new features that help performance are present, but not obvious. One of these is that all computations are transparently asynchronous, which allows the CPU part of the Ops to execute in parallel with the GPU part. A mechanism keeps track of the dependencies between operations to ensure that the right data is always used. Data transfers are automatically done on a separate stream, so they can overlap with the computation.
The new back-end is now fully functional and well tested for correctness. It supports almost all the operations of the old back-end on CUDA-capable devices, including wrapping cuDNN for efficient convolutions, but we are still in the process of tuning some of its kernels for better performance. In particular, int64-based indexing can be significantly slower than int32-based indexing, so some adjustments have to be made.
# 6. Data parallelism with Platoon
To take advantage of multiple computing devices, there are two main approaches: model parallelism and data parallelism. Model parallelism consists of splitting the model itself into multiple parts and having those parts computed by different devices.
12 http://deeplearning.net/software/libgpuarray/, code available at https://github.com/Theano/libgpuarray
It requires a careful balancing of the size of the parts and of the communication costs to ensure optimal performance. Data parallelism, on the other hand, is about splitting the input data into multiple parts and running multiple copies of the model. It requires attention to model synchronization, so that the copies don't drift apart too much during training, and to the way the produced results are aggregated.
Usually, data parallelism on a single machine is done using multiple threads, but this approach is unworkable in Python because of the GIL. We therefore have to turn to multiple processes, which presents a new set of challenges. Platoon13 is a package that has been developed to address those challenges and help train Theano models faster by using data parallelism.
Platoon features a central controller process that communicates with different worker processes, each using Theano to train a copy of the model on a CPU or GPU. It uses shared memory to share model parameters between workers, in order to avoid inter-process communication overhead. The communications with the central controller are sent asynchronously, so that the worker does not have to wait for a reply. There is also a script that launches all the workers, monitors them while running, and provides a central "job" to wait for on clusters.
Two ways of performing the updates on the central parameters are currently implemented: Asynchronous SGD (ASGD), similar to Downpour SGD [22], and Elastic Averaging SGD (EASGD) [23]. Other algorithms can be added by implementing additional parameter synchronization rules.
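For concreteness, the following numpy sketch shows the elastic-averaging rule of [23] as one such synchronization rule; it is an illustration of the mathematics, not Platoon's actual API, and the step size eta, elastic strength rho, and gradient callback are assumptions:

```python
import numpy as np

def easgd_step(x_i, x_center, grad_fn, eta=0.01, rho=1.0):
    """One asynchronous EASGD update for a single worker, after [23]."""
    alpha = eta * rho                # elastic coefficient
    diff = x_i - x_center
    # The worker takes a gradient step and is pulled toward the center...
    x_i = x_i - eta * grad_fn(x_i) - alpha * diff
    # ...while the central parameters are pulled toward the worker.
    x_center = x_center + alpha * diff
    return x_i, x_center
```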
# B. Faster compilation of graphs
# 1. Faster, simpler optimizer
As mentioned in Section II B 1, some sets of optimizations are pre-defined and can be easily specified. One of these optimizers, "fast_compile", has recently been upgraded to include the optimizations that transfer computation to a GPU, as well as the optimizations necessary to make those optimizations apply. This drastically shortens the graph optimization time, at the cost of a slightly slower execution time and increased memory usage. That option can speed up the development or prototyping phase of a model, allowing the developer to iterate faster.
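For instance, the lighter optimizer can be requested globally with the flag optimizer=fast_compile (as in Section IV A), or per function through the mode argument; the toy expression below is our own:

```python
import theano
import theano.tensor as T

x = T.matrix('x')
y = T.nnet.softmax(T.dot(x, x.T))

# Use the lighter optimizer for this function only; setting
# THEANO_FLAGS="optimizer=fast_compile" would apply it globally instead.
f = theano.function([x], y, mode=theano.compile.Mode(optimizer='fast_compile'))
```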
# 2. Swapping updates without recompiling
It is now possible to copy functions using the function.copy() method. This can be useful when creating functions that are similar but use different shared variables or update parameters, for instance when creating test and validation functions. Most importantly, the optimized graph of the original function is copied, meaning compilation only occurs once.
The interface for copy lets users specify which shared variables to swap, and whether or not updates are carried over. It is also possible to have copied functions share intermediate storage in memory (storage that is not input or output). When this is combined with disabled garbage collection, this can increase execution speed and save memory.
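A minimal sketch of this pattern, with an illustrative model (the parameter names and the swapped-in validation parameters are our own):

```python
import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')
w = theano.shared(np.ones((5, 3), dtype='float32'), name='w')
lr = np.float32(0.1)
cost = (T.dot(x, w) ** 2).sum()

train = theano.function([x], cost,
                        updates=[(w, w - lr * T.grad(cost, w))])

# Re-use the optimized graph without recompiling: drop the updates to get an
# evaluation function, and swap in a different set of parameters.
w_valid = theano.shared(np.zeros((5, 3), dtype='float32'), name='w_valid')
validate = train.copy(swap={w: w_valid}, delete_updates=True)
```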
# 3. Save and reload optimized graphs
Optimized computation graphs, such as the ones in Theano functions, can now be serialized using the pickle module, and de-serialized without being optimized again. It is possible to force the re-optimization, for instance if the set of optional dependencies available has changed between saving and reloading, in which case the function may not run (if a dependency has been removed) or be sub-optimal (if one has been added). This is especially useful when check-pointing and restoring running experiments. Note that the C++ or CUDA code may still need to be recompiled.
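A minimal sketch, where f is any compiled Theano function (the file name is illustrative):

```python
import pickle

# Serialize the compiled function together with its optimized graph.
with open('checkpoint_fn.pkl', 'wb') as out:
    pickle.dump(f, out, protocol=pickle.HIGHEST_PROTOCOL)

# Later, possibly in another process: reloading skips graph optimization,
# although C++ or CUDA code may still be recompiled, as noted above.
with open('checkpoint_fn.pkl', 'rb') as inp:
    f = pickle.load(inp)
```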
# C. Visualization, debugging, and diagnostic tools
Since the definition of Theano functions is separate from their execution, some specific tools have been developed to help users visualize parts or the whole of the computation graph, pinpoint the origin of errors, and understand what is happening at execution time.
13 https://github.com/mila-udem/platoon
# 1. Interactive visualization with d3viz
Interactive visualization of computation graphs is now possible with the d3viz module, which extends Theano's printing module. Instead of outputting a text representation (like debugprint) or creating a static picture (like pydotprint), it creates an HTML file, which can be opened with current web browsers. An example is shown in Figure 1.
FIG. 1. Interactive graph visualization with d3viz. Profiling colors have been activated, with redder nodes corresponding to longer computation times. Blue arrows indicate a node returns a view of the input, red arrows indicate a destroyed input.
Several features are supported. Users can zoom different regions, move graphs via drag and drop, and position nodes both manually and automatically. The visualization can retrieve additional information about nodes and edges, such as their data type or definition in the source code, edit node labels, and visualize profiling information. Nested graphs such as OpFromGraph nodes can also be explored by expanding or shrinking the nodes as needed.
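Generating such a visualization only takes one call; a minimal sketch (the model and file name are our own):

```python
import numpy as np
import theano
import theano.tensor as T
import theano.d3viz as d3v

x = T.dmatrix('x')
w = theano.shared(np.random.randn(10, 5), name='w')
b = theano.shared(np.zeros(5), name='b')
predict = theano.function([x], T.nnet.softmax(T.dot(x, w) + b))

# Writes a self-contained HTML page with the interactive graph.
d3v.d3viz(predict, 'predict.html')
```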
Internally, d3viz represents a compute graph in the Graphviz DOT language, using the pydot package, and defines a front-end based on the d3.js library to visualize it. However, any other Graphviz front-end can be used, which makes it possible to export graphs to different formats such as PNG and PDF.
# 2. Test values
Detecting errors in the way a mathematical expression is implemented in Theano can be a challenge, since it is not possible to directly map an intermediate Variable node to the value that will be associated with it at execution time. To mitigate this problem, it is possible to associate a test value with each input variable, and to automatically compute the values associated with intermediate variables as soon as they are defined. This makes it much easier to detect shape mismatches or unexpected values, for instance.
Note that these values are computed only once, when the graph is built. That means that stability optimizations will not be applied to these values, so NaN (not-a-number) values could be produced during that phase, even if they would not be present when evaluating the optimized graph.
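A minimal sketch (the shapes, chosen to be deliberately incompatible, are illustrative):

```python
import numpy as np
import theano
import theano.tensor as T

# Raise an error as soon as a test value cannot be computed.
theano.config.compute_test_value = 'raise'

x = T.matrix('x')
w = T.matrix('w')
x.tag.test_value = np.random.rand(4, 3).astype(theano.config.floatX)
w.tag.test_value = np.random.rand(5, 2).astype(theano.config.floatX)

# The shape mismatch (4, 3) x (5, 2) is reported here, at graph-construction
# time, instead of later when a compiled function is executed.
y = T.dot(x, w)
```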
# 3. NanGuardMode
A frequent symptom of issues when optimizing a model is the appearance of NaN (not-a-number), infinity, or very large values. They can indicate a wide range of issues, e.g., use of un-initialized memory, lack of numerical stability in the computation, or divergence of the algorithm itself.
To help diagnose the appearance of such values, NanGuardMode is an instrumented version of the runtime environment that can check the values of inputs and outputs of each Apply node during execution, and raise an error when problematic values are detected.
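A minimal sketch (the expression, which produces NaN for negative inputs, is our own):

```python
import numpy as np
import theano
import theano.tensor as T
from theano.compile.nanguardmode import NanGuardMode

x = T.matrix('x')
y = T.log(x)  # NaN for negative inputs

f = theano.function(
    [x], y,
    mode=NanGuardMode(nan_is_error=True, inf_is_error=True, big_is_error=True))

# Raises an error identifying the offending Apply node, instead of
# silently propagating NaN values:
f(-np.ones((3, 5), dtype=theano.config.floatX))
```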
# 4. The PdbBreakPoint Op
PdbBreakPoint is an Op designed to check the value of a condition, which is a symbolic expression, during the execution of a Theano function. If the condition is met, then the program will drop into the Python debugger (pdb), and make available the values associated to a list of pre-defined monitored variables. This is especially useful when something goes wrong during the training of a model, but only after a number of iterations, so it is not practical to log all values all the time.
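A minimal sketch, under the assumption that the Op is importable as theano.tests.breakpoint.PdbBreakpoint and passes its monitored inputs through unchanged:

```python
import theano
import theano.tensor as T
from theano.tests.breakpoint import PdbBreakpoint

x = T.vector('x')
cost = (x ** 2).sum()

# Drop into pdb whenever the cost exceeds 100, with `cost` available
# for inspection inside the debugger.
breakpoint_op = PdbBreakpoint('cost blew up')
monitored_cost = breakpoint_op(cost > 100, cost)

f = theano.function([x], monitored_cost)
```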
# 5. Keeping the creation stack trace
When a variable is created, part of the stack trace is recorded, in particular the line of the call that created it. For instance, if variable z is created by calling z = a + b, then the line where that expression is called is associated to z. If evaluating that expression fails, for instance because a and b have incompatible shapes, then the error message will mention that file and line number.
A challenge of that mechanism is that, when optimizations are applied, the replacement variables are not created at the same place as the ones they replace (or that "correspond" to them in a more general sense). In fact, they are created inside the optimization, so no stack trace is associated to them. For instance, if the expression above is optimized to move a and b to a GPU, and z gets replaced by host_from_gpu(gpu_z) where gpu_z = gpu_add(gpu_a, gpu_b), then the replacement for z can easily retain the original stack trace, but gpu_z would not.
To improve this feature, we are currently in the process of going through all optimizations, so that they assign the creation stack trace of the original variable (or variables) to the "corresponding" or equivalent one when they create replacements or new intermediate variables.
# IV. BENCHMARKS
This section aims at giving a sense of the performance one might expect from Theano, compared to some of its largest competitors among machine learning research software, on different kinds of models. We used publicly-available software to compare against, when possible. We have already made some of the benchmarking code public, and will try to provide the remaining code in the future.
The goal of having more extensive benchmarks, on a wider variety of models and frameworks, is more easily attained by online projects, which can provide a more up-to-date picture. Among these projects, we can cite convnet-benchmarks,14 rnn-benchmarks,15 and hopefully DeepMark16 in the future.
We benchmarked Theano against Torch and TensorFlow (Section IV A) on three kinds of popular machine learning models: convolutional networks (Section IV B), recurrent neural networks (Section IV C), and recurrent neural networks for sequence-to-sequence mapping (Section IV D). Finally, we show how the computation speed scales when using multiple GPUs with Platoon (Section IV E).
# A. Setup
All the benchmarks were run on an NVIDIA Digits DevBox, with 4 Titan X GPUs and a Core i7-5930K CPU. All the benchmarks except for data parallelism were run on only one GPU, which was not the one used for running the X server (using CUDA_VISIBLE_DEVICES). We used CUDA 7.5.17, with cuDNN v4 (version 4007), and data type float32, for all frameworks and all experiments.
The compared software was installed as follows:
• Theano was installed from the development version, at commit 1bd371c. The following configuration flags were used: floatX=float32, lib.cnmem=0.45, device=gpu0, optimizer_including=unsafe, dnn.conv.algo_fwd=time_once, dnn.conv.algo_bwd_filter=time_once, dnn.conv.algo_bwd_data=time_once. For fast_compile experiments, the additional option optimizer=fast_compile was provided.
• TensorFlow 0.8 was installed from the binary package.
• Torch7 was installed from https://github.com/torch/distro at commit ffffc39.
14 https://github.com/soumith/convnet-benchmarks/
15 https://github.com/glample/rnn-benchmarks
16 https://github.com/DeepMark/deepmark
# B. Convolutional networks
We measure the performance of four different convolutional models that have been successfully used on the ImageNet dataset:
• AlexNet, the one-column variant from [24], with a batch size of 128;
• OverFeat, the fast variant from [25], with a batch size of 128;
• VGG, also known as OxfordNet, model A [26], with a batch size of 64;
• GoogLeNet V1 [27], with a batch size of 128.
We used the code from https://github.com/soumith/convnet-benchmarks at commit 84b5bb1 for Theano, Torch, and TensorFlow. We report the processing time per minibatch, for the forward and the backward pass.
FIG. 2. Processing time for convolutional networks on ImageNet (milliseconds per batch, lower is better). Dark colors show forward computation time, pale colors show backward time.
The results, presented in Figure 2, show that Theano is slightly slower than Torch and TensorFlow, but the performance is comparable, both for the forward and the backward passes. Furthermore, using the fast_compile optimizer shows a slow-down of only 10% to 25%, which is a reasonable trade-off when developing or exploring a new model.
# C. Recurrent neural networks: LSTM on Penn Treebank
To showcase recurrent network models, we benchmarked variants of the LSTM model applied to the Penn Treebank dataset described in [28]. We compared:
• the Torch implementation available at https://github.com/wojzaremba/lstm;
• the TensorFlow implementation showcased at https://www.tensorflow.org/versions/r0.8/tutorials/recurrent/;17 and
• the Theano implementation available at https://github.com/caglar/rnn_benchmarks.
We measured words per second during training, and report results on the following models:
• Small: single layer, 200 hidden units, sequence length 20;
• Medium: single layer, 600 hidden units, sequence length 40;
• Large: two layers, 650 hidden units each, sequence length 50.
All three models used dropout on non-recurrent connections during training, following [28]. The batch size was set to 20.
Figure 3 shows that Theano comes second behind TensorFlow for the small model, but is slightly faster on the medium and large models. Torch was slower than Theano on all three models and, perhaps more surprisingly, slower than the fast_compile version of Theano on the two larger models.
17 Code at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/models/rnn/ptb
FIG. 3. Processing speed for different LSTM models on the Penn Treebank data set (words per second, higher is better).
# D. Sequence-to-sequence: Caption generation from video
In this section, we use the sequence-to-sequence mapping model from [29]. The input is a series of video frames and the output is a one-sentence English description of the input. Each input video frame is preprocessed by a GoogLeNet that was pre-trained for classification on ImageNet. The representation of the frame is thus a 1024-dimensional vector. The entire input is therefore represented by (M, F, 1024), where M is the minibatch size and F is the number of frames. The output size is (M, L), where M is the minibatch size and L the sentence length (padding is used within a minibatch to ensure the same length, but different minibatches could have different L). Specifically, the model is written as P(S|V), an LSTM on the sentence S, conditioned on the video V. V is a weighted sum of frame representations.
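To make the conditioning concrete, here is a minimal Theano sketch of the weighted sum over frame representations; the scoring by a single learned vector is a simplification we introduce for illustration, whereas the model of [29] computes the weights with a soft-attention mechanism conditioned on the decoder state:

```python
import numpy as np
import theano
import theano.tensor as T

frames = T.tensor3('frames')   # (M, F, 1024) GoogLeNet frame features
u = theano.shared(np.random.randn(1024).astype(theano.config.floatX), name='u')

scores = T.dot(frames, u)                  # (M, F): one score per frame
alpha = T.nnet.softmax(scores)             # (M, F): attention weights
V = (frames * alpha.dimshuffle(0, 1, 'x')).sum(axis=1)   # (M, 1024)

video_repr = theano.function([frames], V)
```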
The original code for [29] is available at https://github.com/yaoli/arctic-capgen-vid. We used simplified versions, in Theano and TensorFlow, instrumented for profiling, which will be made public in the future. There was no publicly available implementation in Torch. Theano in fast compile mode could not run because it required too much memory. We report the processing time per minibatch, for the forward and backward passes, using three different batch sizes.
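The measurements follow the usual pattern of timing a compiled step function over many minibatches. The sketch below is illustrative only (the instrumented scripts mentioned above are not yet public); `step_fn` stands in for one forward or forward-plus-backward call.

```python
import time

def ms_per_minibatch(step_fn, batches, warmup=5, reps=50):
    for b in batches[:warmup]:
        step_fn(b)                  # discard warm-up (allocation, compilation)
    start = time.time()
    for i in range(reps):
        step_fn(batches[i % len(batches)])
    return 1000.0 * (time.time() - start) / reps  # milliseconds per batch
```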
FIG. 4. Processing time for generating word sequences from video representations (milliseconds per batch, lower is better). Dark colors show forward computation time, pale colors show backward time.
Figure 4 shows a small advantage for Theano on the forward pass, but a disadvantage on the backward pass. The total time was comparable overall, with Theano slightly faster on smaller batches and TensorFlow faster on larger ones. As expected, the time per minibatch grows more slowly than the minibatch size, because larger batches offer more potential for parallel computation.
# E. Data parallelism for LSTM
We re-use the models from Section IV C, this time using Platoon to train on multiple GPUs on the same machine, using asynchronous SGD (ASGD). We report results for 2 GPUs (using devices gpu1 and gpu2) and 4 GPUs, compared against the 1-GPU results obtained without Platoon and reported in Section IV C. We measured the overall processing speed (words per second) during training when synchronizing the models after every minibatch, and when synchronizing only every 100 batches; a toy sketch of the latter pattern is given below. The benchmarking code using Platoon will be made public soon.
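This sketch is illustrative only and does not use Platoon's actual API: `run_worker` performs local SGD steps on a toy least-squares problem and exchanges parameters with a central copy, elastic-averaging style, every `sync_every` batches.

```python
import numpy as np

def run_worker(batches, central, sync_every=100, lr=0.05, alpha=0.5):
    local = central.copy()
    for i, (x, y) in enumerate(batches):
        grad = 2.0 * (x.dot(local) - y) * x   # gradient of (x.w - y)**2
        local -= lr * grad                    # local SGD step
        if (i + 1) % sync_every == 0:
            diff = local - central
            central += alpha * diff           # push the central copy...
            local -= alpha * diff             # ...and pull the worker back
    return local

rng = np.random.RandomState(0)
w_true = np.array([1.0, -2.0])
batches = [(x, x.dot(w_true)) for x in rng.randn(1000, 2)]
print(run_worker(batches, np.zeros(2)))       # approaches [1, -2]
```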
FIG. 5. Processing speed using multiple GPUs with Platoon on different LSTM models, synchronizing after each batch (left) and every 100 batches (right) (1000 words per second, higher is better).
Figure 5 shows a consistent increase in processing speed when adding more GPUs. As can be seen on the left, communication and synchronization overhead makes the scaling sub-linear when synchronizing after every single batch: we found a speed-up between 1.6 and 1.7 for 2 GPUs and around 3.2 for 4 GPUs across all three models. Synchronizing only every 100 batches, on the right, brings the computation speed-up close to the theoretical optimum, at 2 for 2 GPUs and between 3.9 and 4 for 4 GPUs.
# V. LIMITATIONS AND CHALLENGES
Despite the progress made in recent years and our best efforts, there remain some limitations and shortcomings in Theano. Some of these issues have been addressed by competing frameworks mentioned in Section II E, and by other projects like CGT (Computation Graph Toolkit).18

18 http://rll.berkeley.edu/cgt/
# A. Limitations from Python
Since Theano uses Python as its core language, and uses NumPy arrays and other Python objects to store values, it is affected by Python's limitations. The main one is the Python GIL (global interpreter lock), which limits the concurrent execution of threads. We have seen that it is possible to make single-threaded execution fast by compiling binary modules that are then loaded in Python (Sections II B 3 and II C), and it would also be possible to release the GIL during the execution of these functions. However, the GIL has to be acquired again each time references to Python objects are added or removed when using the C API of Python and NumPy. Since the execution of such functions is usually quite short, most threads would spend their time waiting for the lock instead of performing actual computation.
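The effect is easy to reproduce with a small experiment: two CPU-bound Python threads take about as long as running the same work serially, because only the thread holding the GIL executes Python bytecode.

```python
import threading
import time

def busy(n=10**7):
    s = 0
    for i in range(n):
        s += i

t0 = time.time(); busy(); busy()
serial = time.time() - t0

t0 = time.time()
ts = [threading.Thread(target=busy) for _ in range(2)]
for t in ts: t.start()
for t in ts: t.join()
threaded = time.time() - t0

print(serial, threaded)  # roughly equal: threads give no speed-up here
```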
Since Python has a concept of threads and expects to be in charge of threading, it is also not possible to launch different, independent Python interpreters in different threads of the same process, as is possible with Lua for instance.
To avoid that issue, we could use a different n-dimensional array structure that is accessible directly from C++ without actually being a Python object, like the one libgpuarray provides on the GPU. It would require Theano to explicitly manage
memory allocation and deallocation, in a thread-safe way. It would also require rewriting all the C++ and CUDA code for existing Ops, so that they use a different interface for reading their input data and writing their output data. Finally, it could make it harder to create new Ops by integrating existing Python code.
# B. Graph optimization time
The execution time of the graph optimization phase does not scale well with graph size; currently, it scales supra-linearly with the number of nodes. One issue is that some groups of local optimizations are applied over and over, until none of them can be applied any more and the graph stops changing. In practice, this can force a number of passes through the whole graph that grows with graph size (the chance of some local optimization applying somewhere is higher). A toy illustration of this fixed-point pattern is given below.
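In this caricature, local rewrites are retried over the whole graph until a fixed point is reached, so the number of full passes grows with the depth of rewrite opportunities. The node encoding and the `cancel_double_neg` rewrite are purely illustrative, not Theano internals.

```python
def optimize_to_fixpoint(nodes, local_rewrites):
    passes = 0
    changed = True
    while changed:                       # keep sweeping until nothing changes
        changed = False
        passes += 1
        for i in range(len(nodes)):
            for rewrite in local_rewrites:
                new = rewrite(nodes[i])
                if new is not None:
                    nodes[i] = new
                    changed = True
    return nodes, passes

def cancel_double_neg(node):
    # ('neg', ('neg', x)) -> x ; returns None when the rewrite does not apply.
    if isinstance(node, tuple) and node[0] == 'neg' \
            and isinstance(node[1], tuple) and node[1][0] == 'neg':
        return node[1][1]
    return None

graph = [('neg', ('neg', ('neg', ('neg', 'x'))))]
print(optimize_to_fixpoint(graph, [cancel_double_neg]))  # (['x'], 3)
```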
An option would be to completely reorganize the existing optimizations so that they are more lightweight, and can be applied in a fixed number of passes through the graph. It could be possible, for instance, to use a one-pass or two-pass optimization phase, like CGT does. Doing that without any regressions in the stability optimizations could be a large-scale project.
# C. Code compilation time
Currently, the same Theano Op can generate a large quantity of different C++ or CUDA modules, depending on its properties at compile time, such as the data type of inputs and outputs, whether it will run in place, and other flags determining its behaviour. Compiling and loading those modules can take time and add load on the file system.
To alleviate those issues, it would be possible in most cases to pass that information dynamically at runtime, instead of hard-coding it in the generated code. This approach is already being used in the new back-end to specify which GPU should be used for the execution of a particular Apply node, but it could be generalized.
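Schematically (the names below are illustrative, not Theano internals), the difference is between keying a compilation cache on every property combination and compiling one generic module that reads the same properties at call time:

```python
compiled_modules = {}

def specialized(dtype, inplace):
    # Compile-time specialization: one module per property combination,
    # so the cache (and total compilation time) grows with every new one.
    key = (dtype, inplace)
    if key not in compiled_modules:
        compiled_modules[key] = 'module<%s, inplace=%s>' % (dtype, inplace)
    return compiled_modules[key]

def generic(data, dtype, inplace):
    # Runtime parameterization: a single compiled module receives the same
    # properties as arguments on each call.
    return 'run(dtype=%s, inplace=%s, n=%d)' % (dtype, inplace, len(data))

print(specialized('float32', True))
print(generic([1.0, 2.0, 3.0], 'float32', True))
```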
# D. Loops and control-flow structures
Using Scan for loops, and the ifelse lazy Op for conditionals, has proven a useful way of expressing control-flow operations. However, with an increasing need for more flexibility (attention mechanisms, nested loops, recursive loops, changes in shape between iterations of the same loop), we may need a more principled way of expressing these structures.
One appealing way would be to use switch and merge Apply nodes in the computation graph, like in a dataflow graph [13]. This is the approach taken by TensorFlow [5] for symbolic loops. It would require adding support for cycles in the computation graph in these circumstances, extending the runtime to be able to recompute values inside the loop, and rewriting all the graph optimizations currently existing for Scan, including the ones limiting memory consumption.
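A toy interpreter fragment gives the flavour of these primitives ([13]; TensorFlow uses the same idea for its symbolic loops). The Python encoding below is illustrative: `switch` routes a value to one of two outputs based on a predicate, and `merge` forwards whichever of its inputs is available, which is what makes cycles expressible.

```python
def switch(pred, value):
    # Route `value` to the (loop, exit) output selected by `pred`.
    return (value, None) if pred else (None, value)

def merge(a, b):
    # Forward whichever input is available.
    return a if a is not None else b

def dataflow_while(cond, body, x):
    state = merge(x, None)                 # first iteration: external input
    while True:
        looped, exited = switch(cond(state), state)
        if exited is not None:
            return exited
        state = merge(None, body(looped))  # the back edge of the cycle

print(dataflow_while(lambda v: v < 10, lambda v: v * 2, 1))  # prints 16
```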
# E. Multi-node parallelism
Scaling model execution and training to multiple machines is outside the scope of Theano's core, but additional packages could be developed to interface with Theano, in the same way Platoon does for multiple GPUs in a single node. In fact, tools like parameter servers and coordinators do not have to be specific to Theano, and could be common to different frameworks.
# F. Improving memory usage
Given the limited availability of on-board GPU memory, memory consumption is often a bottleneck when training machine learning algorithms. This can limit the size and modelling power of trainable models, and leave the processing power of GPUs under-used, for instance when batch sizes have to be reduced. In addition to storing intermediate values in a lower-precision format (for instance, storing data as float16 is supported in Theano's new GPU back-end), different options could be explored and combined; a sketch of the third option follows the list.
• Change the order of execution of computations, so that peak memory usage is reduced. This can be done statically before the function is executed, or dynamically, for instance by detecting that memory is insufficient and waiting for some other computation to finish and free intermediate values.
• Move intermediate values to the main (CPU) memory, or to another GPU's memory, if they are not needed for a while, and transfer them back before they are used again. This method has been successfully implemented by [30].
• Free intermediate values, and recompute them when they are needed again. This approach has been used in [31], and can be especially useful for fast operations that have large outputs.
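As a minimal numpy sketch of the third option, consider a chain of elementwise tanh layers: the forward pass keeps only its input, and the backward pass recomputes each activation on demand, trading extra computation (quadratic in depth here) for memory. The functions are illustrative, not Theano code.

```python
import numpy as np

def forward(x, n_layers):
    for _ in range(n_layers):
        x = np.tanh(x)
    return x                        # intermediate activations are discarded

def backward_with_recompute(x0, n_layers, grad_out):
    g = grad_out
    for k in reversed(range(n_layers)):
        y = forward(x0, k + 1)      # recompute the output of layer k
        g = g * (1.0 - y ** 2)      # d tanh(h)/dh, evaluated via y = tanh(h)
    return g                        # gradient with respect to x0

x0 = np.array([0.5, -1.0])
print(backward_with_recompute(x0, 4, np.ones(2)))
```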
# G. The future of gradient-based computation frameworks
Tools like Theano and TensorFlow are compilers for mathematical expressions, in that they require the code (or computation graph) to be defined first, and then executed. On the other hand, Torch works more like an interpreter: the computation is done as soon as the expression is called. It could be interesting to explore how to apply JIT (just-in-time) compiler ideas to the computation graph, to combine the immediate response and flexibility of an interpreter (including using control flow statements like if, for, while, from the language directly) with the performance gains of a compiler when an expression has to be evaluated multiple times.
Most machine-learning frameworks can now share efficient implementations of GPU kernels, such as the ones published by NVIDIA (cuDNN) and Nervana. Graph optimizations could be another component shared between projects, maybe through a common language to define computation graphs and such optimizations. It could be common to machine learning frameworks and computer algebra systems (CAS) such as SymPy [32] and SympyCore.19

19 https://github.com/pearu/sympycore
# VI. CONCLUSION
Theano pioneered ideas for efficient gradient-based computation that are now part of most mainstream machine-learning research libraries: combining a high-level scripting language with highly-optimized computation kernels (especially on GPUs), symbolic computation graphs, and symbolic differentiation. Some other features of Theano, like graph rewriting and optimizations, and the automatic generation and compilation of kernels, are starting to become more widely used as well.
Continuous improvements have been made to Theano's functionality, usability, and performance, for instance wrapping libraries like cuDNN, and integrating ideas that have been successfully explored and implemented by other frameworks, like data parallelism and model parallelism for distributed computation. Computation performance is on par with other major research software, like Torch and TensorFlow.
There are ways to improve Theano (and other frameworks as well) by taking inspiration from other machine learning software (sometimes more experimental). Longer-term improvements could be the result of collaborations with other fields, for instance CAS, and language and compiler design, in order to build a next generation of mathematical computation software.
# ACKNOWLEDGMENTS
We acknowledge the support of the following organizations for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs, and CIFAR.
The authors would like to thank all the other committers to Theano: Faruk Ahmed, Diogo Moitinho de Almeida, Hani Almousli, Andrea, Martin Andrews, John Arevalo, Martin Arjovsky, Kai Arulkumaran, Ben Athiwaratkun, bbabeshkin, Markus Beissinger, Sebastian Berg, Thierry Bertin-Mahieux, Lucas Beyer, Merlijn Blaauw, Jörg Bornschein, Ethan Buchman, Bogdan Budescu, Yaroslav Bulatov, Liwei Cai, Brian Cheung, Claude Coulombe, Frans Cronje, Rolf van Dam, Jonas Degrave, Misha Denil, Doug, Zach Dwiel, Ilya Dyachenko, Douglas Eck, Michael Eickenberg, Amir Elaguizy, eulerreich, Marco Fagiani, Raul Chandias Ferrari, Abraham Flaxman, Mike C. Fletcher, Piotr Frankowski, Geoffrey French, Adithya Ganesh,
Dario Garcia, Sergii Gavrylov, Wojciech Głogowski, Matthew Koichi Grimes, gw0, Christophe Van Gysel, Yaroslav Halchenko, Tim Head, Hei, Jonathan Ho, Paul Hollensen, Andre Georg Holzner, Liang-Chi Hsieh, Eric Hunsberger, Jonathan J. Hunt, Vlad Ionescu, Andy Jiang, jojolalpin, joncrall, Yi-Lin Juang, Vik Kamath, Moslem Kazemi, Kevin Keraudren, Robert Kern, Marius Killinger, Taesup Kim, Jey Kottalam, Stefan Krastanov, Gokula Krishnan, Matthias Kümmerer, Kosuke Kusano, Micky Latowicki, Eric Laufer, Sergei Lebedev, Rémy Léone, Wei Li, Peng Liu, Jakob Lombacher, Gilles Louppe, Jan-Matthis Lückmann, Michael I. Mandel, Daniel Maturana, Sergey Matyunin, Madison May,
Ben McCann, Clay McLeod, Thomas Mesnard, Grégoire Mesnil, Luke Metz, Kyle Meyer, Marco De Nadai, Anchit Navelkar, Alassane Ndiaye, Huy Nguyen, Michael Opitz, Johannes Otterbach, Wei Ouyang, Daniil Pakhomov, Seon-Wook Park, Fábio Perez, Steven Pigeon, Nicolas Pinto, Zach Ploskey, Bhavishya Pohani, Ben Poole, Rahul, Sirisha Rambhatla, Kashif Rasul, Julien Rebetez, Marc-Antoine Rondeau, Tim Salimans, Adam Salvail, Joao Felipe Santos, Utkarsh Saxena, Ludwig Schmidt-Hackenberg, Ilan Schnell, Hannes Schulz, Anish Shah, Saatvik Shah, Shai, Yurii Shevchuk, Scott Sievert, Søren Kaae Sønderby, spotted1234, Graham Taylor, Texot, theaverageguy, Martin Thoma,
1605.02688 | 65 | 19 https://github.com/pearu/sympycore
Thomé, Chiheb Trabelsi, Matthew Trentacoste, Christos Tsirigotis, Karen Ullrich, Prayag Verma, Karel Vesely, Mang Wang, XterNalz, Yu Yang, yobibyte, Jason Yosinski, Lee Zamparo, John Zedlewski, Albert Zeyer, and ziyuang.
[1] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio, "Theano: A CPU and GPU math expression compiler," in Proceedings of the Python for Scientific Computing Conference (SciPy) (2010).
[2] James Bergstra, Frédéric Bastien, Olivier Breuleux, Pascal Lamblin, Razvan Pascanu, Olivier Delalleau, Guillaume Desjardins, David Warde-Farley, Ian J. Goodfellow, Arnaud Bergeron, and Yoshua Bengio, "Theano: Deep learning on GPUs with Python," in Big Learning Workshop, NIPS (2011). | 1605.02688#65 | Theano: A Python framework for fast computation of mathematical expressions |
1605.02688 | 66 | [3] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio, "Theano: New features and speed improvements," Deep Learning and Unsupervised Feature Learning Workshop, NIPS (2012).
[4] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet, "Torch7: A matlab-like environment for machine learning," in Big Learning Workshop, NIPS (2011). | 1605.02688#66 | Theano: A Python framework for fast computation of mathematical expressions |
1605.02688 | 67 | Workshop, NIPS (2011).
[5] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng, "TensorFlow: Large-scale machine learning on heterogeneous systems," (2015), software available from tensorflow.org.
[6] Stefan van der Walt, S. Chris Colbert, and Gael Varoquaux, "The NumPy array: A structure for efficient numerical computation," Computing in Science and Eng. 13, 22–30 (2011). | 1605.02688#67 | Theano: A Python framework for fast computation of mathematical expressions |
1605.02688 | 68 | [7] Eric Jones, Travis Oliphant, Pearu Peterson, et al., "SciPy: Open source scientific tools for Python," (2001–), [Online; accessed 2016-04-19].
[8] Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Frédéric Bastien, and Yoshua Bengio, "Pylearn2: A machine learning research library," arXiv e-prints abs/1308.4214 (2013).
[9] Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio, "Blocks and Fuel: Frameworks for deep learning," arXiv e-prints abs/1506.00619 (2015). | 1605.02688#68 | Theano: A Python framework for fast computation of mathematical expressions |