id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
1604.06174#17 | Training Deep Nets with Sublinear Memory Cost | Specifically, we can view each segment as a bulk operator that combines all the operations inside the segment together. The idea is illustrated in Fig. 4. The combined operator calculates the gradient by executing over the sub-graph that describes its internal computation. This view allows us to treat a series of operations as subroutines. The optimization within the sub-graph does not affect the external world. As a result, we can recursively apply our memory optimization scheme to each sub-graph. Pay Even Less Memory with Recursion Let g(n) be the memory cost of doing a forward and backward pass on an n-layer neural network. Assume that we store k intermediate results in the graph and apply the same strategy recursively when doing the forward and backward pass on the sub-path. We have the following recursion formula: g(n) = k + g(n/(k + 1)) (2) Solving this recursion formula gives us g(n) = k log_{k+1}(n) (3) | 1604.06174#16 | 1604.06174#18 | 1604.06174 | [
"1512.03385"
] |
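The recursion in Eq. (2) is easy to check numerically. The sketch below is illustrative only (the function name and base case are our assumptions, not code from the paper): it evaluates the memory cost of an n-layer chain when k intermediate results are stored per recursion level and the same strategy is applied to each sub-path.

```python
import math

def g(n, k):
    """Memory cost (in stored feature maps) of forward+backward over an
    n-layer chain, keeping k checkpoints and recursing into one sub-path
    of length n/(k+1), as in Eq. (2)."""
    if n <= 1:
        return 1  # assumed base case: a single layer's result
    return k + g(math.ceil(n / (k + 1)), k)

for k in (1, 2, 4):
    # recursion vs. the closed form k * log_{k+1}(n) from Eq. (3);
    # the ceil() makes the numeric value slightly larger than the closed form
    print(k, g(1000, k), round(k * math.log(1000, k + 1), 1))
```

Setting k = 1 reproduces the g(n) = log2(n) special case discussed in the next chunk.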
1604.06174#18 | Training Deep Nets with Sublinear Memory Cost | As a special case, if we set k = 1, we get g(n) = log2(n). This is an interesting conclusion, as all existing implementations take O(n) feature-map memory to train an n-layer neural network. This comes at an O(log2(n)) factor of additional forward-pass cost, so it may not be used commonly. But it demonstrates how we can trade memory even further by using recursion. # 4.5 Guideline for Deep Learning Frameworks In this section, we have shown that it is possible to trade computation for memory and combine it with the system optimizations proposed in Sec. 3. It is helpful for deep learning frameworks to: • Enable an option to drop the results of low-cost operations. | 1604.06174#17 | 1604.06174#19 | 1604.06174 | [
"1512.03385"
] |
1604.06174#19 | Training Deep Nets with Sublinear Memory Cost | • Provide planning algorithms to give an efficient memory plan. • Enable users to set the mirror attribute in the computation graph for memory optimization. While the last option is not strictly necessary, providing such an interface enables users to hack their own memory optimizers and encourages future research in related directions. Under this spirit, we support the customization of graph mirror plans and will make the source code publicly available. # 5 Experiments # 5.1 Experiment Setup We evaluate the memory cost of storing intermediate feature maps using the methods described in this paper. We build our method on top of MXNet [6], which statically allocates all the intermediate feature maps before computation. This enables us to report the exact memory cost spent on feature maps. Note that the memory cost of parameters and temporary memory (e.g. required by convolution) is not part of the reported memory cost. We also record the total runtime memory cost by running training steps on a Titan X GPU. Note that all the memory optimizations proposed in this paper give equivalent weight gradients for training and can always be safely applied. We compare the following memory allocation algorithms: | 1604.06174#18 | 1604.06174#20 | 1604.06174 | [
"1512.03385"
] |
1604.06174#20 | Training Deep Nets with Sublinear Memory Cost | • no optimization: directly allocate memory to each node in the graph without any optimization. • inplace: enable inplace optimization when possible. • sharing: enable inplace optimization as well as sharing. This represents all the system optimizations presented in Sec. 3. • drop bn-relu: apply all system optimizations and drop the results of batch norm and relu; this is only shown in the convolutional net benchmark. • sublinear plan: apply all system optimizations and use plan search with Alg. 3 to trade computation for memory. | 1604.06174#19 | 1604.06174#21 | 1604.06174 | [
"1512.03385"
] |
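A back-of-the-envelope comparison of the strategies above can be sketched as follows. The constants (per-map size, the 2-3x sharing factor, the 2·sqrt(n) map count for the sublinear plan) are rough assumptions of ours for illustration, not measurements from the paper.

```python
import math

MB_PER_MAP = 25  # assumed feature-map size per layer for one batch, in MB

def feature_map_memory_mb(n_layers, strategy):
    if strategy == "no optimization":
        maps = n_layers                    # one buffer per node in the graph
    elif strategy == "sharing":
        maps = n_layers / 2.5              # inplace + sharing: ~2-3x reduction
    elif strategy == "sublinear plan":
        maps = 2 * math.sqrt(n_layers)     # O(sqrt(n)) checkpointing
    else:
        raise ValueError(strategy)
    return maps * MB_PER_MAP

for n in (50, 200, 1000):
    print(n, {s: round(feature_map_memory_mb(n, s)) for s in
              ("no optimization", "sharing", "sublinear plan")})
```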
1604.06174#21 | Training Deep Nets with Sublinear Memory Cost | # 5.2 Deep Convolutional Network We first evaluate the proposed method on convolutional neural networks for image classification. We use the deep residual network architecture [11] (ResNet), which gives the state-of-the-art result on this task. Specifically, we use a batch size of 32 and set the input image shape to (3, 224, 224). We generate different depth configurations of ResNet (footnote 1) by increasing the depth of each residual stage. We show the results in Fig. 5. | 1604.06174#20 | 1604.06174#22 | 1604.06174 | [
"1512.03385"
] |
1604.06174#22 | Training Deep Nets with Sublinear Memory Cost | We can find that the system optimizations introduced in Sec. 3 help to reduce the memory cost by a factor of two to three. However, the memory cost after optimization still exhibits a linear trend with respect to the number of layers. Even with all the system optimizations, it is only possible to train a 200-layer ResNet with the best GPU we can get. On the other hand, the proposed algorithm gives a sub-linear trend in terms of the number of layers. By trading computation for memory, we can train a 1000-layer ResNet using less than 7GB of GPU memory. Footnote 1: We count a conv-bn-relu as one layer. | 1604.06174#21 | 1604.06174#23 | 1604.06174 | [
"1512.03385"
] |
1604.06174#23 | Training Deep Nets with Sublinear Memory Cost | (a) Feature map memory cost estimation (b) Runtime total memory cost Figure 5: The memory cost of different allocation strategies on deep residual net configurations. The feature map memory cost is generated from the static memory allocation plan. We also use nvidia-smi to measure the total memory cost during runtime (the missing points are due to out of memory). The figures are in log-scale, so y = αx^β translates to log(y) = β log(x) + log(α). We can find that the graph-based allocation strategy indeed helps to reduce the memory cost by a factor of two to three. More importantly, the sub-linear planning algorithm indeed gives a sub-linear memory trend with respect to the workload. The real runtime results also confirm that we can use our method to greatly reduce the memory cost of deep net training. (a) Feature map memory cost estimation (b) Runtime total memory cost | 1604.06174#22 | 1604.06174#24 | 1604.06174 | [
"1512.03385"
] |
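Since the figures are in log-scale, the exponent β of a trend y = αx^β can be read off as the slope of a line on log-log axes. A small illustration with synthetic numbers (the data below is made up, not taken from Fig. 5):

```python
import numpy as np

layers = np.array([50, 100, 200, 400, 800])
memory = 0.3 * layers ** 0.5  # synthetic sub-linear "measurements" (beta = 0.5)

# slope on log-log axes recovers beta; intercept recovers log(alpha)
beta, log_alpha = np.polyfit(np.log(layers), np.log(memory), 1)
print(f"fitted beta = {beta:.2f}, alpha = {np.exp(log_alpha):.2f}")
```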
1604.06174#24 | Training Deep Nets with Sublinear Memory Cost | Figure 6: The memory cost of different memory allocation strategies on LSTM configurations. System optimization gives substantial memory savings on the LSTM graph, which contains many fine-grained operations. The sub-linear plan can give more than a 4x reduction over the optimized plan that does not trade computation for memory. # 5.3 LSTM for Long Sequences We also evaluate the algorithms on an LSTM under a long-sequence unrolling setting. We unrolled a four-layer LSTM with 1024 hidden states for 64 steps over time. The batch size is set to 64. The input at each timestamp is a continuous 50-dimensional vector and the output is a softmax over 5000 classes. This is a typical setting for speech recognition [17], but our results can also be generalized to other recurrent networks. Using a long unrolling step can potentially help recurrent models to learn long | 1604.06174#23 | 1604.06174#25 | 1604.06174 | [
"1512.03385"
] |
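A rough estimate of the recurrent state held by this unrolled LSTM can be computed from the configuration in the text. The bytes-per-value (fp32) and the two states per LSTM layer (hidden h and cell c) are our assumptions, not numbers from the paper.

```python
layers, steps, batch, hidden = 4, 64, 64, 1024   # from the experiment setup
bytes_per_value = 4                              # assumed fp32
states_per_layer = 2                             # assumed: hidden h_t and cell c_t

values = layers * steps * batch * hidden * states_per_layer
print(f"~{values * bytes_per_value / 2**20:.0f} MB of recurrent state alone")
```

This counts only h and c; the gate activations kept for the backward pass multiply this several times over, which is why the fine-grained LSTM graph benefits so much from the optimizations.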
1604.06174#25 | Training Deep Nets with Sublinear Memory Cost | (a) ResNet (b) LSTM Figure 7: The runtime speed of different allocation strategies in the two settings. The speed is measured by running 20 batches on a Titan X GPU. We can see that using the sub-linear memory plan incurs roughly 30% additional runtime cost compared to linear memory allocation. The general trend of speed vs. workload remains linear for both strategies. term dependencies over time. We show the results in Fig. 6. We can find that inplace helps a lot here. This is because the inplace optimization in our experiment enables direct addition of the weight gradient to a single memory cell, preventing allocation of space for gradients at each timestamp. The sub-linear plan gives more than a 4x reduction over the optimized memory plan. # 5.4 Impact on Training Speed We also measure the runtime cost of each strategy. The speed is benchmarked on a single Titan X GPU. The results are shown in Fig. 7. Because of the double forward cost in gradient calculation, the sub-linear allocation strategy costs 30% additional runtime compared to the normal strategy. By paying this small price, we are now able to train a much wider range of deep learning models. | 1604.06174#24 | 1604.06174#26 | 1604.06174 | [
"1512.03385"
] |
1604.06174#26 | Training Deep Nets with Sublinear Memory Cost | # 6 Conclusion In this paper, we proposed a systematic approach to reduce the memory consumption of the intermediate feature maps when training deep neural networks. Computation graph liveness analysis is used to enable memory sharing between feature maps. We also showed that we can trade computation for memory. By combining the techniques, we can train an n-layer deep neural network with only O(√n) memory cost, paying nothing more than one extra forward computation per mini-batch. | 1604.06174#25 | 1604.06174#27 | 1604.06174 | [
"1512.03385"
] |
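The O(√n) result can be made concrete with a short sketch of the underlying trade: checkpoint every k-th activation with k ≈ √n during the forward pass, then re-run the forward pass inside each segment during the backward pass. The layer interface (.forward, and .backward taking the layer input plus the output gradient) is our simplification, not the paper's implementation.

```python
import math

def sqrt_checkpoint_backward(layers, x0, grad_out):
    """Backward pass with O(sqrt(n)) stored activations and one extra
    forward computation per mini-batch (illustrative sketch)."""
    n = len(layers)
    k = max(1, int(math.sqrt(n)))

    # forward: keep only segment-boundary activations
    checkpoints, x = {0: x0}, x0
    for i, layer in enumerate(layers):
        x = layer.forward(x)
        if (i + 1) % k == 0:
            checkpoints[i + 1] = x

    g = grad_out
    for start in range(k * ((n - 1) // k), -1, -k):
        end = min(start + k, n)
        # recompute this segment's activations from its checkpoint...
        acts = [checkpoints[start]]
        for layer in layers[start:end]:
            acts.append(layer.forward(acts[-1]))
        # ...then backprop through the segment
        for j in range(end - 1, start - 1, -1):
            g = layers[j].backward(acts[j - start], g)
    return g
```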
1604.06174#27 | Training Deep Nets with Sublinear Memory Cost | # Acknowledgement We thank the MXNet community and developers for their helpful feedback. We thank Ian Goodfellow and Yu Zhang for helpful discussions on computation-memory tradeoffs. We would like to thank David Warde-Farley for pointing out the relation to gradient checkpointing. We would like to thank Nvidia for the hardware support. This work was supported in part by ONR (PECASE) N000141010672, NSF IIS 1258741 and the TerraSwarm Research Center sponsored by MARCO and DARPA. Chiyuan Zhang acknowledges the support of a Nuance Foundation Grant. | 1604.06174#26 | 1604.06174#28 | 1604.06174 | [
"1512.03385"
] |
1604.06174#28 | Training Deep Nets with Sublinear Memory Cost | # References [1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. | 1604.06174#27 | 1604.06174#29 | 1604.06174 | [
"1512.03385"
] |
1604.06174#29 | Training Deep Nets with Sublinear Memory Cost | TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. [2] Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac, Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu, Yu Zhang, and Geoffrey Zweig. | 1604.06174#28 | 1604.06174#30 | 1604.06174 | [
"1512.03385"
] |
1604.06174#30 | Training Deep Nets with Sublinear Memory Cost | An introduction to computational networks and the computational network toolkit. Technical Report MSR-TR-2014-112, August 2014. [3] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1986. [4] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012. [5] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation. [6] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: | 1604.06174#29 | 1604.06174#31 | 1604.06174 | [
"1512.03385"
] |
1604.06174#31 | Training Deep Nets with Sublinear Memory Cost | A flexible and efficient machine learning library for heterogeneous distributed systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems (LearningSys'15), 2015. [7] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012. [8] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. | 1604.06174#30 | 1604.06174#32 | 1604.06174 | [
"1512.03385"
] |
1604.06174#32 | Training Deep Nets with Sublinear Memory Cost | Book in preparation for MIT Press, 2016. [9] Andreas Griewank and Andrea Walther. Algorithm 799: Revolve: An implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Trans. Math. Softw., 26(1):19–45, March 2000. [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. [12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997. | 1604.06174#31 | 1604.06174#33 | 1604.06174 | [
"1512.03385"
] |
1604.06174#33 | Training Deep Nets with Sublinear Memory Cost | [13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), 2015. [14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097– | 1604.06174#32 | 1604.06174#34 | 1604.06174 | [
"1512.03385"
] |
1604.06174#34 | Training Deep Nets with Sublinear Memory Cost | 1105. 2012. [15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In S. Haykin and B. Kosko, editors, Intelligent Signal Processing, pages 306–351. IEEE Press, 2001. [16] Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W Keckler. | 1604.06174#33 | 1604.06174#35 | 1604.06174 | [
"1512.03385"
] |
1604.06174#35 | Training Deep Nets with Sublinear Memory Cost | Virtualizing deep neural networks for memory-efficient neural network design. arXiv preprint arXiv:1602.08124, 2016. [17] Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, Singapore, September 14-18, 2014, pages 338–342, 2014. [18] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. arXiv preprint arXiv:1507.06228, 2015. [19] Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James Glass. Highway long short-term memory RNNs for distant speech recognition. arXiv preprint arXiv:1510.08983, 2015. | 1604.06174#34 | 1604.06174#36 | 1604.06174 | [
"1512.03385"
] |
1604.06174#36 | Training Deep Nets with Sublinear Memory Cost | # A Search over Budget B Alg. 3 allows us to generate an optimized memory plan given a single parameter B. This algorithm relies on approximate memory estimation for faster speed. After we get the plan, we can use the static allocation algorithm to calculate the exact memory cost. We can then do a grid search over B to find a good memory plan. To get the setting of the grid, we first run the allocation algorithm with B = 0, then run the allocation algorithm again with B = √(xy). Here x and y are the outputs from Alg. 3 in the first run: x is the approximate cost to store inter-stage feature maps and y is the approximate cost to run each stage. B = √(xy) gives an estimation of each stage's memory cost. This can already give a good memory plan. | 1604.06174#35 | 1604.06174#37 | 1604.06174 | [
"1512.03385"
] |
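The budget search in Appendix A can be sketched as follows. Here `make_plan` stands in for Alg. 3 and `exact_memory` for the static allocator; both interfaces are our hypothetical stand-ins, since neither algorithm's code appears in these chunks.

```python
import math

def search_budget(make_plan, exact_memory, grid=(0.5, 0.7, 1.0, 1.4, 2.0)):
    """Grid search over the memory budget B. `make_plan(B)` plays the role
    of Alg. 3 and returns (plan, x, y): x ~ cost of inter-stage feature maps,
    y ~ cost of running one stage. `exact_memory(plan)` returns the static
    allocator's exact measurement."""
    _, x, y = make_plan(0)            # first run with B = 0
    B0 = math.sqrt(x * y)             # initial guess for the budget
    best = None
    for scale in grid:                # grid around B0, roughly [B0/sqrt(2), sqrt(2)*B0]
        plan, _, _ = make_plan(scale * B0)
        cost = exact_memory(plan)
        if best is None or cost < best[0]:
            best = (cost, plan)
    return best
```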
1604.06174#37 | Training Deep Nets with Sublinear Memory Cost | We then set the grid around B = √(xy) to further refine the solution. A grid over [B/√2, √2 B] can already give good memory plans in the experiments. We implemented the allocation algorithm in Python without any attempt to optimize for speed. Our code costs a few seconds to get the plans needed in the experiments. | 1604.06174#36 | 1604.06174 | [
"1512.03385"
] |
1604.03168#0 | Hardware-oriented Approximation of Convolutional Neural Networks | arXiv:1604.03168v3 [cs.CV] 20 Oct 2016. Accepted as a workshop contribution at ICLR 2016 # HARDWARE-ORIENTED APPROXIMATION OF CONVOLUTIONAL NEURAL NETWORKS Philipp Gysel, Mohammad Motamedi & Soheil Ghiasi Department of Electrical and Computer Engineering University of California, Davis Davis, CA 95616, USA {pmgysel,mmotamedi,ghiasi}@ucdavis.edu # ABSTRACT | 1604.03168#1 | 1604.03168 | [
"1602.07360"
] |
1604.03168#1 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are arguably the most promising approach for reducing both execution time and power consumption. One of the most important steps in accelerator development is hardware-oriented model approximation. In this paper we present Ristretto, a model approximation framework that analyzes a given CNN with respect to the numerical resolution used in representing weights and outputs of convolutional and fully connected layers. Ristretto can condense models by using fixed point arithmetic and representation instead of floating point. Moreover, Ristretto fine-tunes the resulting fi | 1604.03168#0 | 1604.03168#2 | 1604.03168 | [
"1602.07360"
] |
1604.03168#2 | Hardware-oriented Approximation of Convolutional Neural Networks | xed point network. Given a maximum error tolerance of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit. The code for Ristretto is available. # INTRODUCTION The annually held ILSVRC competition has seen state-of-the-art classification accuracies by deep networks such as AlexNet by Krizhevsky et al. (2012), VGG by Simonyan & Zisserman (2015), GoogleNet (Szegedy et al., 2015) and ResNet (He et al., 2015). These networks contain millions of parameters and require billions of arithmetic operations. Various solutions have been offered to reduce the resource requirements of CNNs. Fixed point arithmetic is less resource hungry compared to floating point. Moreover, it has been shown that fixed point arithmetic is adequate for neural network computation (Hammerstrom, 1990). This observation has been leveraged recently to condense deep CNNs. Gupta et al. (2015) show that networks on datasets like CIFAR-10 (10 image classes) can be trained in 16-bit. Further trimming of the same network uses as low as 7-bit multipliers (Courbariaux et al., 2014). Another approach by Courbariaux et al. (2016) uses binary weights and activations, again on the same network. The complexity of deep CNNs can be split into two parts. First, the convolutional layers contain more than 90% of the required arithmetic operations. | 1604.03168#1 | 1604.03168#3 | 1604.03168 | [
"1602.07360"
] |
1604.03168#3 | Hardware-oriented Approximation of Convolutional Neural Networks | By turning these floating point operations into operations with small fixed point numbers, both the chip area and energy consumption can be significantly reduced. The second resource-intense layer type is fully connected layers, which contain over 90% of the network parameters. As a nice by-product of using bit-width-reduced fixed point numbers, the data transfer to off-chip memory is reduced for fully connected layers. In this paper, we concentrate on approximating convolutional and fully connected layers only. | 1604.03168#2 | 1604.03168#4 | 1604.03168 | [
"1602.07360"
] |
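The practical effect of the 90% parameter share can be sketched with a quick calculation. The model size below is a ballpark figure of ours (roughly AlexNet-scale), not a number from the paper.

```python
params_total = 60e6   # assumed: roughly AlexNet-sized parameter count
fc_share = 0.9        # fully connected layers hold over 90% of parameters

for bits in (32, 16, 8, 4):
    mb = params_total * fc_share * bits / 8 / 2**20
    print(f"{bits:2d}-bit weights -> {mb:6.1f} MB of FC parameters to transfer")
```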
1604.03168#4 | Hardware-oriented Approximation of Convolutional Neural Networks | Using fixed point arithmetic is a hardware-friendly way of approximating CNNs. It allows the use of smaller processing elements and reduces the memory requirements without adding any computational overhead such as decompression. Even though it has been shown that CNNs perform well with small fixed point numbers, there exists no thorough investigation of the delicate trade-off between bit-width reduction and accuracy loss. In this paper we present Ristretto, which automatically finds a perfect balance between the bit-width reduction and the given maximum error tolerance. Ristretto performs a fast and fully automated trimming analysis of any given network. This post-training tool can be used for application-specific trimming of neural networks. | 1604.03168#3 | 1604.03168#5 | 1604.03168 | [
"1602.07360"
] |
1604.03168#5 | Hardware-oriented Approximation of Convolutional Neural Networks | # 2 MIXED FIXED POINT PRECISION In the next two sections we discuss quantization of a floating point CNN to fixed point. Moreover, we explain dynamic fixed point, and show how it can be used to further decrease network size while maintaining the classification accuracy. [Figure 1 diagram: m-bit layer activations are multiplied with n-bit weights; the products are accumulated through an adder tree whose outputs grow from m+n+2 bits to m+n+log2(x) bits, a 32-bit adder adds the bias, and the result is truncated to m bits at the layer output.] Figure 1: Data path of quantized convolutional and fully connected layers. The data path of fully connected and convolutional layers consists of a series of MAC operations (multiplication and accumulation), as shown in Figure 1. The layer activations are multiplied with the network weights, and the results are accumulated to form the output. As shown by Qiu et al. (2016), it is a good approach to use mixed precision, i.e., different parts of a CNN use different bit-widths. In Figure 1, m and n refer to the number of bits for layer outputs and layer weights, respectively. Multiplication results are accumulated using an adder tree which gets thicker towards the end. The adder outputs in the first level are m + n + 2 bits wide, and the bit-width grows by 1 bit in each level. In the last level, the bit-width is m + n + log2(x), where x is the number of multiplication operations per output value. In the last stage, the bias is added to form the layer output. For each network layer, we need to find the right balance between reducing the bit-widths (m and n) and maintaining a good classification accuracy. # 3 DYNAMIC FIXED POINT The different parts of a CNN have a significant dynamic range. In large layers, the outputs are the result of thousands of accumulations, thus the network parameters are much smaller than the layer outputs. Fixed point has only limited capability to cover a wide dynamic range. | 1604.03168#4 | 1604.03168#6 | 1604.03168 | [
"1602.07360"
] |
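The bit growth along the adder tree can be tabulated directly from the description above; the small helper below follows the stated rule (first level m+n+2 bits, one extra bit per level, last level m+n+log2(x)) and is purely illustrative.

```python
import math

def adder_tree_widths(m, n, x):
    """Bit-widths of the adder-tree levels in the Figure 1 data path."""
    levels = int(math.log2(x))                 # x products per output value
    return [m + n + 2 + lvl for lvl in range(levels - 1)]

print(adder_tree_widths(m=8, n=8, x=1024))     # 18 ... 26 = 8 + 8 + log2(1024)
```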
1604.03168#6 | Hardware-oriented Approximation of Convolutional Neural Networks | Dynamic fixed point (Williamson, 1991; Courbariaux et al., 2014) is a solution to this problem. In dynamic fixed point, each number is represented as follows: (-1)^s * 2^(-fl) * sum_{i=0}^{B-2} 2^i * x_i. Here B denotes the bit-width, s the sign bit, fl is the fractional length, and x the mantissa bits. The intermediate values in a network have different ranges. Therefore it is desirable to assign fixed point numbers into groups with constant fl, such that the number of bits allocated to the fractional part is constant within that group. Each network layer is split into two groups: one for the layer outputs, one for the layer weights. This allows us to better cover the dynamic range of both layer outputs and weights, as weights are normally significantly smaller. On the hardware side, it is possible to realize dynamic fixed point arithmetic using bit shifters. Different hardware accelerators for deployment of neural networks have been proposed (Motamedi et al., 2016; Qiu et al., 2016; Han et al., 2016a). | 1604.03168#5 | 1604.03168#7 | 1604.03168 | [
"1602.07360"
] |
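The representation formula can be exercised with a tiny decoder. The helper name and bit layout below are our choices for illustration, not Ristretto code.

```python
def decode_dynamic_fixed(bits, fl):
    """Decode a dynamic fixed point number per Sec. 3:
    value = (-1)^s * 2^(-fl) * sum_i 2^i * x_i,
    where bits = [s, x_{B-2}, ..., x_0] for a B-bit word."""
    s, mantissa = bits[0], bits[1:]
    mag = sum(x << i for i, x in enumerate(reversed(mantissa)))
    return (-1) ** s * mag * 2.0 ** (-fl)

# 8-bit example: sign 0, mantissa 0110010 (= 50), fractional length 5
print(decode_dynamic_fixed([0, 0, 1, 1, 0, 0, 1, 0], fl=5))  # 50 / 32 = 1.5625
```

A different fl per group shifts the same B-bit grid up or down in magnitude, which is how the scheme covers the very different ranges of weights and layer outputs.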
1604.03168#7 | Hardware-oriented Approximation of Convolutional Neural Networks | The first important step in accelerator design is the compression of the network in question. In the next section we present Ristretto, a tool which can condense any neural network in a fast and automated fashion. # 4 RISTRETTO: APPROXIMATION FRAMEWORK IN CAFFE From Caffe to Ristretto According to Wikipedia, Ristretto is "a short shot of espresso coffee made with the normal amount of ground coffee but extracted with about half the amount of water" | 1604.03168#6 | 1604.03168#8 | 1604.03168 | [
"1602.07360"
] |
1604.03168#8 | Hardware-oriented Approximation of Convolutional Neural Networks | . Similarly, our compressor removes the unnecessary parts of a CNN, while making sure the essence, the ability to predict image classes, is preserved. With its strong community and fast training for deep CNNs, Caffe (Jia et al., 2014) is an excellent framework to build on. Ristretto takes a trained model as input, and automatically brews a condensed network version. Input and output of Ristretto are a network description file (prototxt) and the network parameters. Optionally, the quantized network can be fine-tuned with Ristretto. The resulting fixed point model in Caffe format can then be used for a hardware accelerator. [Figure 2 diagram: Weight Analysis (determine statistical parameters for effective quantization) → Activation Analysis (determine statistical parameters for effective quantization) → Bit-Width Reduction (determine the required bit-width for different layers) → Fine-tuning (retrain fixed point network parameters), with the accuracy tested on the training set to review the effect.] Figure 2: Network approximation fl | 1604.03168#7 | 1604.03168#9 | 1604.03168 | [
"1602.07360"
] |
1604.03168#9 | Hardware-oriented Approximation of Convolutional Neural Networks | ow with Ristretto. Quantization flow Ristretto's quantization flow has five stages (Figure 2) to compress a floating point network into fixed point. In the first step, the dynamic range of the weights is analyzed to find a good fixed point representation. For the quantization from floating point to fixed point, we use round-nearest. | 1604.03168#8 | 1604.03168#10 | 1604.03168 | [
"1602.07360"
] |
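Two pieces of this flow lend themselves to short sketches: the round-to-nearest quantizer and the per-part bit-width search. The function names, the saturating grid, and the `evaluate` stand-in below are our assumptions, not Ristretto's actual code.

```python
def quantize_round_nearest(w, bit_width, fl):
    """Round-to-nearest onto a signed fixed point grid with `bit_width`
    total bits and fractional length `fl`, saturating at the grid limits."""
    step = 2.0 ** (-fl)
    q_min = -(2 ** (bit_width - 1)) * step
    q_max = (2 ** (bit_width - 1) - 1) * step
    return min(max(round(w / step) * step, q_min), q_max)

def smallest_bit_width(evaluate, baseline, tol, lo=2, hi=16):
    """Binary search for the smallest bit-width whose accuracy drop stays
    within `tol`; `evaluate(bits)` stands in for running the net with one
    network part quantized and the rest kept in floating point."""
    while lo < hi:
        mid = (lo + hi) // 2
        if baseline - evaluate(mid) <= tol:
            hi = mid          # accurate enough: try fewer bits
        else:
            lo = mid + 1      # too lossy: need more bits
    return lo
```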
1604.03168#10 | Hardware-oriented Approximation of Convolutional Neural Networks | The second step runs several thousand images in the forward path. The generated layer activations are analyzed to generate statistical parameters. Ristretto uses enough bits in the integer part of fixed point numbers to avoid saturation of layer activations. Next Ristretto performs a binary search to find the optimal number of bits for convolutional weights, fully connected weights, and layer outputs. In this step, a certain network part is quantized, while the rest remains in floating point. Since there are three network parts that should use independent bit-widths (weights of convolutional and fully connected layers as well as layer outputs), iteratively quantizing one network part allows us to find the optimal bit-width for each part. Once a good trade-off between small number representation and classification accuracy is found, the resulting fixed point network is retrained. Fine-tuning In order to make up for the accuracy drop incurred by quantization, the fixed point network is fine-tuned in Ristretto. During this retraining procedure, the network learns how to classify images with fixed point parameters. Since the network weights can only have discrete values, the main challenge consists in the weight update. We adopt the idea of previous work (Courbariaux et al., 2015) which uses full precision shadow weights. Small weight updates Δw are applied to the full precision weights w, whereas the discrete weights w' are sampled from the full precision weights. The sampling during fine-tuning is done with stochastic rounding. This rounding scheme was successfully used by Gupta et al. (2015) for weight updates of 16-bit fixed point networks. Ristretto uses the fine-tuning procedure illustrated in Figure 3. For each batch, the full precision weights are quantized to fixed point. During forward propagation, these discrete weights are used to compute the layer outputs y. Each layer l turns its input batch x_l into output y_l, according to its function f_l : (x_l, w_l' | 1604.03168#9 | 1604.03168#11 | 1604.03168 | [
"1602.07360"
] |
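The shadow-weight update with stochastic rounding can be sketched as follows; this is a minimal illustration of the idea, with names and the scalar interface chosen by us.

```python
import random

def stochastic_round(w, step):
    """Round w to the fixed point grid with spacing `step`, rounding up with
    probability proportional to the remainder (preserves the expected value)."""
    low = (w // step) * step
    p = (w - low) / step
    return low + step if random.random() < p else low

def finetune_step(shadow_w, grad, lr, step):
    """One fine-tuning step in the spirit of Figure 3: the small update is
    applied to the full-precision shadow weight, and the discrete weight
    used in the next forward pass is sampled from it."""
    shadow_w -= lr * grad                     # update full-precision copy
    w_discrete = stochastic_round(shadow_w, step)
    return shadow_w, w_discrete
```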
1604.03168#11 | Hardware-oriented Approximation of Convolutional Neural Networks | ) → y_l. Assuming the last layer computes the loss, we denote f as the overall CNN function. [Figure 3 diagram: full-precision shadow weights w receive parameter updates Δw; discrete weights w' are sampled from w via stochastic rounding for forward and backward propagation during training, and via round-nearest for measuring accuracy on the validation data.] Figure 3: Fine-tuning with shadow weights. The left side shows the training process with full-precision shadow weights. On the right side the fi | 1604.03168#10 | 1604.03168#12 | 1604.03168 | [
"1602.07360"
] |
1604.03168#12 | Hardware-oriented Approximation of Convolutional Neural Networks | ne-tuned network is benchmarked on the validation data set. Fixed point values are represented in orange. The goal of back propagation is to compute the error gradient δf/δw with respect to each fixed point parameter. For parameter updates we use the Adam rule by Kingma & Ba (2015). As an important observation, we do not quantize layer outputs to fixed point during fine-tuning. We use floating point layer outputs instead, which enables Ristretto to analytically compute the error gradient with respect to each parameter. In contrast, the validation of the network is done with fixed point layer outputs. To achieve the best fine-tuning results, we used a learning rate that is an order of magnitude lower than in the last full precision training iteration. Since the choice of hyper parameters for retraining is crucial (Bergstra & Bengio, 2012), Ristretto relies on minimal human intervention in this step. | 1604.03168#11 | 1604.03168#13 | 1604.03168 | [
"1602.07360"
] |
1604.03168#13 | Hardware-oriented Approximation of Convolutional Neural Networks | Fast fine-tuning with fixed point parameters Ristretto brews a condensed network with fixed point weights and fixed point layer activations. For simulation of the forward propagation in hardware, Ristretto uses full floating point for accumulation. This follows the thought of Gupta et al. (2015) and conforms with our description of the forward data path in hardware (Figure 1). During fine-tuning, the full precision weights need to be converted to fixed point for each batch, but after that all computation can be done in floating point (Figure 3). Therefore Ristretto can fully leverage optimized matrix-matrix multiplication routines for both forward and backward propagation. Thanks to its fast implementation on the GPU, a fixed point CaffeNet can be tested on the ILSVRC 2014 validation dataset (50k images) in less than 2 minutes (using one Tesla K-40 GPU). | 1604.03168#12 | 1604.03168#14 | 1604.03168 | [
"1602.07360"
] |
1604.03168#14 | Hardware-oriented Approximation of Convolutional Neural Networks | # 5 RESULTS In this section we present the results of approximating 32-bit floating point networks by condensed fixed point models. All classification accuracies were obtained by running the respective network on the whole validation dataset. We present approximation results of Ristretto for five different networks. First, we consider LeNet (LeCun et al., 1998), which can classify handwritten digits (MNIST dataset). Second, the CIFAR-10 Full model provided by Caffe is used to classify images into 10 different classes. Third, we condense CaffeNet, which is the Caffe version of AlexNet and classifies images into the 1000 ImageNet categories. Fourth, we use the BVLC version of GoogLeNet (Szegedy et al., 2015) to classify images of the same data set. Finally, we approximate SqueezeNet (Iandola et al., 2016), a recently proposed architecture with the classification accuracy of AlexNet, but >50X fewer parameters. Impact of dynamic fixed point We used Ristretto to quantize CaffeNet (AlexNet) into fixed point, and compare traditional fixed point with dynamic fixed point. | 1604.03168#13 | 1604.03168#15 | 1604.03168 | [
"1602.07360"
] |
1604.03168#15 | Hardware-oriented Approximation of Convolutional Neural Networks | To allow a simpler comparison, all layer outputs and network parameters share the same bit-width. Results show a good performance of static fixed point for as low as 18-bit (Figure 4). However, when reducing the bit-width further, the accuracy starts to drop significantly, while dynamic fixed point has a stable accuracy. [Figure 4 plot: classification accuracy vs. bit-width, comparing dynamic fixed point against static fixed point with integer lengths of 9, 10, and 11 bits.] Figure 4: Impact of dynamic fixed point: The fi | 1604.03168#14 | 1604.03168#16 | 1604.03168 | [
"1602.07360"
] |
1604.03168#16 | Hardware-oriented Approximation of Convolutional Neural Networks | gure shows top-1 accuracy for CaffeNet on the ILSVRC 2014 validation dataset. Integer length refers to the number of bits assigned to the integer part of fixed point numbers. We can conclude that dynamic fixed point performs significantly better for such a large network. With dynamic fixed point, we can adapt the number of bits allocated to the integer and fractional part, according to the dynamic range of different parts of the network. We will therefore concentrate on dynamic fixed point for the subsequent experiments. Quantization of individual network parts In this section, we analyze the impact of quantization on different parts of a floating point CNN. Table 1 shows the classification accuracy when the layer outputs, the convolution kernels or the parameters of fully connected layers are quantized to dynamic fixed point. In all three nets, the convolution kernels and layer activations can be trimmed to 8-bit with an absolute accuracy change of only 0.3%. Fully connected layers are more affected by trimming to 8-bit weights; the absolute change is maximally 0.9%. Interestingly, LeNet weights can be trimmed to as low as 2-bit, with an absolute accuracy change below 0.4%. | 1604.03168#15 | 1604.03168#17 | 1604.03168 | [
"1602.07360"
] |
1604.03168#17 | Hardware-oriented Approximation of Convolutional Neural Networks | Table 1: Quantization results for different parts of three networks. Only one number category is cast to fixed point, and the remaining numbers are in floating point format. Each row lists the accuracy at fixed point bit-widths of 16 / 8 / 4 / 2 bits. LeNet, 32-bit floating point accuracy 99.1%: Layer output 99.1% / 99.1% / 98.9% / 85.9%; CONV parameters 99.1% / 99.1% / 99.1% / 98.9%; FC parameters 99.1% / 99.1% / 98.9% / 98.7%. Full CIFAR-10, 32-bit floating point accuracy 81.7%: Layer output 81.6% / 81.6% / 79.6% / 48.0%; CONV parameters 81.7% / 81.4% / 75.9% / 19.1%; FC parameters 81.7% / 80.8% / 79.9% / 77.5%. CaffeNet top-1, 32-bit floating point accuracy 56.9%: Layer output 56.8% / 56.7% / 06.0% / 00.1%; CONV parameters 56.9% / 56.7% / 00.1% / 00.1%; FC parameters 56.9% / 56.3% / 00.1% / 00.1%. Fine-tuning of all considered network parts Here we report the accuracy of five networks that were condensed and fine-tuned with Ristretto. All networks use dynamic fixed point parameters as well as dynamic fixed point layer outputs for convolutional and fully connected layers. LeNet performs well in 2/4-bit, while CIFAR-10 and | 1604.03168#16 | 1604.03168#18 | 1604.03168 | [
"1602.07360"
] |
1604.03168#18 | Hardware-oriented Approximation of Convolutional Neural Networks | the three ImageNet CNNs can be trimmed to 8-bit (see Table 2). Surprisingly, these compressed networks still perform nearly as well as their floating point baseline. The relative accuracy drops of LeNet, CIFAR-10 and SqueezeNet are very small (<0.6%), whereas the approximation of the larger CaffeNet and GoogLeNet incurs a slightly higher cost (0.9% and 2.3% respectively). | 1604.03168#17 | 1604.03168#19 | 1604.03168 | [
"1602.07360"
] |
1604.03168#19 | Hardware-oriented Approximation of Convolutional Neural Networks | We hope we will further improve the fine-tuning results of these larger networks in the future. The SqueezeNet architecture was developed by Iandola et al. (2016) with the goal of a small CNN that performs well on the ImageNet data set. Ristretto can make the already small network even smaller, so that its parameter size is less than 2 MB. This condensed network is well-suited for deployment in smart mobile systems. All five 32-bit floating point networks can be approximated well in 8-bit and 4-bit fixed point. For a hardware implementation, this reduces the size of multiplication units by about one order of magnitude. Moreover, the required memory bandwidth is reduced by 4-8X. Finally, it helps to hold 4-8X more parameters in on-chip buffers. The code for reproducing the quantization and fine-tuning results is available (footnote 1). Table 2: Fine-tuned networks with dynamic fixed point parameters and outputs for convolutional and fully connected layers. The numbers in brackets indicate accuracy without fine-tuning. Per network, the table lists the bit-widths of layer outputs / CONV parameters / FC parameters, the 32-bit floating point baseline, and the fixed point accuracy. LeNet (Exp 1): 4 / 4 / 4 bit, 99.1%, 99.0% (98.7%). LeNet (Exp 2): 4 / 2 / 2 bit, 99.1%, 98.8% (98.0%). Full CIFAR-10: 8 / 8 / 8 bit, 81.7%, 81.4% (80.6%). SqueezeNet top-1: 8 / 8 / 8 bit, 57.7%, 57.1% (55.2%). CaffeNet top-1: 8 / 8 / 8 bit, 56.9%, 56.0% (55.8%). GoogLeNet top-1: 8 / 8 / 8 bit, 68.9%, 66.6% (66.1%). A previous work by Courbariaux et al. (2014) concentrates on training with limited numerical precision. They can train a dynamic fi | 1604.03168#18 | 1604.03168#20 | 1604.03168 | [
"1602.07360"
] |
1604.03168#20 | Hardware-oriented Approximation of Convolutional Neural Networks | xed point network on the MNIST data set using just 7 bits to represent activations and weights. Ristretto doesn't reduce the resource requirements for training, but concentrates on inference instead. Ristretto can produce a LeNet network with 2-bit parameters and 4-bit activations. Our approach is different in that we train with high numerical precision, then quantize to fixed point, and finally fine-tune the fixed point network. Other works (Courbariaux et al., 2016; Rastegari et al., 2016) can reduce the bit-width even further to as low as 1-bit, using more advanced number encodings than dynamic fixed point. | 1604.03168#19 | 1604.03168#21 | 1604.03168 | [
"1602.07360"
] |
1604.03168#21 | Hardware-oriented Approximation of Convolutional Neural Networks | Ristretto's strength lies in its capability to approximate a large number of existing floating point models on challenging data sets. For the five considered networks, Ristretto can quantize activations and weights to 8-bit or lower, at an accuracy drop below 2.3% compared to the floating point baseline. While more sophisticated data compression schemes could be used to achieve a higher network size reduction, our approach is very hardware friendly and imposes no additional overhead such as decompression. # 6 CONCLUSION AND FUTURE WORK In this work we presented Ristretto, a Caffe-based approximation framework for deep convolutional neural networks. The framework reduces the memory requirements, area for processing elements and overall power consumption for hardware accelerators. A large net like CaffeNet can be quantized to 8-bit for both weights and layer outputs while keeping the network's accuracy change below 1% compared to its 32-bit floating point counterpart. Ristretto is both fast and automated, and we release the code as an open source project. Ristretto is in its first development stage. We consider adding new features in the future: 1. Shared weights: fetching codebook indices from off-chip memory, instead of real values (Han et al., | 1604.03168#20 | 1604.03168#22 | 1604.03168 | [
"1602.07360"
] |
1604.03168#22 | Hardware-oriented Approximation of Convolutional Neural Networks | Footnote 1: https://github.com/pmgysel/caffe 2016b). 2. Network pruning as shown by the same authors. 3. Network binarization as shown by Courbariaux et al. (2016) and Rastegari et al. (2016). These additional features will help to reduce the bit-width even further, and to reduce the computational complexity of trimmed networks. | 1604.03168#21 | 1604.03168#23 | 1604.03168 | [
"1602.07360"
] |
1604.03168#23 | Hardware-oriented Approximation of Convolutional Neural Networks | # REFERENCES Bergstra, J. and Bengio, Y. Random Search for Hyper-Parameter Optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012. Courbariaux, M., David, J.-P., and Bengio, Y. Training Deep Neural Networks with Low Precision Multiplications. arXiv preprint arXiv:1412.7024, 2014. Courbariaux, M., Bengio, Y., and David, J.-P. | 1604.03168#22 | 1604.03168#24 | 1604.03168 | [
"1602.07360"
] |
1604.03168#24 | Hardware-oriented Approximation of Convolutional Neural Networks | BinaryConnect: Training Deep Neural Networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3105–3113, 2015. Courbariaux, M., Hubara, I., Soudry, D., El-Yaniv, R., and Bengio, Y. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016. | 1604.03168#23 | 1604.03168#25 | 1604.03168 | [
"1602.07360"
] |
1604.03168#25 | Hardware-oriented Approximation of Convolutional Neural Networks | Gupta, S., Agrawal, A., Gopalakrishnan, K., and Narayanan, P. Deep Learning with Limited Numerical Precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1737–1746, 2015. Hammerstrom, D. A VLSI Architecture for High-Performance, Low-Cost, On-chip Learning. In IJCNN International Joint Conference on Neural Networks, 1990, pp. 537– | 1604.03168#24 | 1604.03168#26 | 1604.03168 | [
"1602.07360"
] |
1604.03168#26 | Hardware-oriented Approximation of Convolutional Neural Networks | 544. IEEE, 1990. Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M. A., and Dally, W. J. EIE: Efficient Inference Engine on Compressed Deep Neural Network. arXiv preprint arXiv:1602.01528, 2016a. Han, S., Mao, H., and Dally, W. J. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In International Conference on Learning Representations, 2016b. | 1604.03168#25 | 1604.03168#27 | 1604.03168 | [
"1602.07360"
] |
1604.03168#27 | Hardware-oriented Approximation of Convolutional Neural Networks | He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. arXiv preprint arXiv:1512.03385, 2015. Iandola, F. N., Moskewicz, M. W., Ashraf, K., Han, S., Dally, W. J., and Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360, 2016. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., and Darrell, T. | 1604.03168#26 | 1604.03168#28 | 1604.03168 | [
"1602.07360"
] |
1604.03168#28 | Hardware-oriented Approximation of Convolutional Neural Networks | Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the ACM International Conference on Multimedia, pp. 675–678. ACM, 2014. Kingma, D. and Ba, J. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015. Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012. LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Motamedi, M., Gysel, P., Akella, V., and Ghiasi, S. | 1604.03168#27 | 1604.03168#29 | 1604.03168 | [
"1602.07360"
] |
1604.03168#29 | Hardware-oriented Approximation of Convolutional Neural Networks | Design Space Exploration of FPGA-Based Deep Convolutional Neural Networks. In 2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 575–580. IEEE, 2016. Qiu, J., Wang, J., Yao, S., Guo, K., Li, B., Zhou, E., Yu, J., Tang, T., Xu, N., Song, S., Wang, Y., and Yang, H. Going Deeper with Embedded FPGA Platform for Convolutional Neural Network. In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 26–35, 2016. | 1604.03168#28 | 1604.03168#30 | 1604.03168 | [
"1602.07360"
] |
1604.03168#30 | Hardware-oriented Approximation of Convolutional Neural Networks | Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. arXiv preprint arXiv:1603.05279, 2016. Simonyan, K. and Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations, 2015. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1– | 1604.03168#29 | 1604.03168#31 | 1604.03168 | [
"1602.07360"
] |
1604.03168#31 | Hardware-oriented Approximation of Convolutional Neural Networks | 9, 2015. Williamson, D. Dynamically scaled fixed point arithmetic. In IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, 1991, pp. 315–318. IEEE, 1991. | 1604.03168#30 | 1604.03168 | [
"1602.07360"
] |
1604.00289#0 | Building Machines That Learn and Think Like People | arXiv:1604.00289v3 [cs.AI] 2 Nov 2016. In press at Behavioral and Brain Sciences. # Building Machines That Learn and Think Like People Brenden M. Lake,1 Tomer D. Ullman,2,4 Joshua B. Tenenbaum,2,4 and Samuel J. Gershman3,4 1Center for Data Science, New York University 2Department of Brain and Cognitive Sciences, MIT 3Department of Psychology and Center for Brain Science, Harvard University 4Center for Brains Minds and Machines # Abstract Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. | 1604.00289#1 | 1604.00289 | [
"1511.06114"
] |
1604.00289#1 | Building Machines That Learn and Think Like People | Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models. # 1 Introduction | 1604.00289#0 | 1604.00289#2 | 1604.00289 | [
"1511.06114"
] |
1604.00289#2 | Building Machines That Learn and Think Like People | Artificial intelligence (AI) has been a story of booms and busts, yet by any traditional measure of success, the last few years have been marked by exceptional progress. Much of this progress has come from recent advances in "deep learning," characterized by learning large neural-network-style models with multiple layers of representation. These models have achieved remarkable gains in many domains spanning object recognition, speech recognition, and control (LeCun, Bengio, & Hinton, 2015; Schmidhuber, 2015). In object recognition, Krizhevsky, Sutskever, and Hinton (2012) trained a deep convolutional neural network (convnets; LeCun et al., 1989) that nearly halved the error rate of the previous state-of-the-art on the most challenging benchmark to date. In the years since, convnets continue to dominate, recently approaching human-level performance on some object recognition benchmarks (He, Zhang, Ren, & Sun, 2015; Russakovsky et al., 2015; Szegedy et al., 2014). In automatic speech recognition, Hidden Markov Models (HMMs) have been the leading approach since the late 1980s (Juang & Rabiner, 1990), yet this framework has been chipped away piece by piece and replaced with deep learning components (Hinton et al., | 1604.00289#1 | 1604.00289#3 | 1604.00289 | [
"1511.06114"
] |
1604.00289#3 | Building Machines That Learn and Think Like People | 2012). Now, the leading approaches to speech recognition are fully neural network systems (Graves, Mohamed, & Hinton, 2013; Weng, Yu, Watanabe, & Juang, 2014). Ideas from deep learning have also been applied to learning complex control problems. V. Mnih et al. (2015) combined ideas from deep learning and reinforcement learning to make a "deep reinforcement learning" algorithm that learns to play large classes of simple video games from just frames of pixels and the game score, achieving human or superhuman level performance on many of these games (see also Guo, Singh, Lee, Lewis, & Wang, 2014; Schaul, Quan, Antonoglou, & Silver, 2016; Stadie, Levine, & Abbeel, 2016). These accomplishments have helped neural networks regain their status as a leading paradigm in machine learning, much as they were in the late 1980s and early 1990s. The recent success of neural networks has captured attention beyond academia. In industry, companies such as Google and Facebook have active research divisions exploring these technologies, and object and speech recognition systems based on deep learning have been deployed in core products on smart phones and the web. The media has also covered many of the recent achievements of neural networks, often expressing the view that neural networks have achieved this recent success by virtue of their brain-like computation and thus their ability to emulate human learning and human cognition. In this article, we view this excitement as an opportunity to examine what it means for a machine to learn or think like a person. | 1604.00289#2 | 1604.00289#4 | 1604.00289 | [
"1511.06114"
] |
1604.00289#4 | Building Machines That Learn and Think Like People | We first review some of the criteria previously offered by cognitive scientists, developmental psychologists, and AI researchers. Second, we articulate what we view as the essential ingredients for building such a machine that learns or thinks like a person, synthesizing theoretical ideas and experimental data from research in cognitive science. Third, we consider contemporary AI (and deep learning in particular) in light of these ingredients, finding that deep learning models have yet to incorporate many of them and so may be solving some problems in diff | 1604.00289#3 | 1604.00289#5 | 1604.00289 | [
"1511.06114"
] |
1604.00289#5 | Building Machines That Learn and Think Like People | erent ways than people do. We end by discussing what we view as the most plausible paths towards building machines that learn and think like people. This includes prospects for integrating deep learning with the core cognitive ingredients we identify, inspired in part by recent work fusing neural networks with lower-level building blocks from classic psychology and computer science (attention, working memory, stacks, queues) that have traditionally been seen as incompatible. Beyond the specific ingredients in our proposal, we draw a broader distinction between two different computational approaches to intelligence. The statistical pattern recognition approach treats prediction as primary, usually in the context of a specific classification, regression, or control task. In this view, learning is about discovering features that have high-value states in common (a shared label in a classification setting or a shared value in a reinforcement learning setting) across a large, diverse set of training data. The alternative approach treats models of the world as primary, where learning is the process of model-building. Cognition is about using these models to understand the world, to explain what we see, to imagine what could have happened that didn't, or what could be true that isn't, and then planning actions to make it so. The diff | 1604.00289#4 | 1604.00289#6 | 1604.00289 | [
"1511.06114"
] |
1604.00289#6 | Building Machines That Learn and Think Like People | erence between pattern recognition and model-building, between prediction and explanation, is central to our view of human intelligence. Just as scientists seek to explain nature, not simply predict it, we see human thought as fundamentally a model-building activity. We elaborate this key point with numerous examples below. We also discuss how pattern recognition, even if it is not the core of intelligence, can nonetheless support model-building, through "model-free" algorithms that learn through experience how to make essential inferences more computationally efficient. | 1604.00289#5 | 1604.00289#7 | 1604.00289 | [
"1511.06114"
] |
1604.00289#7 | Building Machines That Learn and Think Like People | Before proceeding, we provide a few caveats about the goals of this article and a brief overview of the key ideas. # 1.1 What this article is not For nearly as long as there have been neural networks, there have been critiques of neural networks (Crick, 1989; Fodor & Pylyshyn, 1988; Marcus, 1998, 2001; Minsky & Papert, 1969; Pinker & Prince, 1988). While we are critical of neural networks in this article, our goal is to build on their successes rather than dwell on their shortcomings. We see a role for neural networks in developing more human-like learning machines: They have been applied in compelling ways to many types of machine learning problems, demonstrating the power of gradient-based learning and deep hierarchies of latent variables. Neural networks also have a rich history as computational models of cognition (McClelland, Rumelhart, & the PDP Research Group, 1986; Rumelhart, McClelland, & the PDP Research Group, 1986) – | 1604.00289#6 | 1604.00289#8 | 1604.00289 | [
"1511.06114"
] |
1604.00289#8 | Building Machines That Learn and Think Like People | a history we describe in more detail in the next section. At a more fundamental level, any computational model of learning must ultimately be grounded in the brain's biological neural networks. We also believe that future generations of neural networks will look very different from the current state-of-the-art. They may be endowed with intuitive physics, theory of mind, causal reasoning, and other capacities we describe in the sections that follow. More structure and inductive biases could be built into the networks or learned from previous experience with related tasks, leading to more human-like patterns of learning and development. Networks may learn to effectively search for and discover new mental models or intuitive theories, and these improved models will, in turn, enable subsequent learning, allowing systems that learn-to-learn – using previous knowledge to make richer inferences from very small amounts of training data. It is also important to draw a distinction between AI that purports to emulate or draw inspiration from aspects of human cognition, and AI that does not. | 1604.00289#7 | 1604.00289#9 | 1604.00289 | [
"1511.06114"
] |
1604.00289#9 | Building Machines That Learn and Think Like People | This article focuses on the former. The latter is a perfectly reasonable and useful approach to developing AI algorithms - avoiding cognitive or neural inspiration as well as claims of cognitive or neural plausibility. Indeed, this is how many researchers have proceeded, and this article has little pertinence to work conducted under this research strategy.[1] On the other hand, we believe that reverse engineering human intelligence can usefully inform AI and machine learning (and has already done so), especially for the types of domains and tasks that people excel at. Despite recent computational achievements, people are better than machines at solving a range of difficult computational problems, including concept learning, scene understanding, language acquisition, language understanding, speech recognition, etc. Other human cognitive abilities remain difficult to understand computationally, including creativity, common sense, and general purpose reasoning. As long as natural intelligence remains the best example of intelligence, we believe that the project of reverse engineering the human solutions to difficult computational problems will continue to inform and advance AI. Finally, while we focus on neural network approaches to AI, we do not wish to give the impression that these are the only contributors to recent advances in AI. On the contrary, some of the [1] In their influential textbook, Russell and Norvig (2003) state that "The quest for 'artificial flight' | 1604.00289#8 | 1604.00289#10 | 1604.00289 | [
"1511.06114"
] |
1604.00289#10 | Building Machines That Learn and Think Like People | succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics." (p. 3). # Table 1: Glossary Neural network: A network of simple neuron-like processing units that collectively perform complex computations. Neural networks are often organized into layers, including an input layer that presents the data (e.g., an image), hidden layers that transform the data into intermediate representations, and an output layer that produces a response (e.g., a label or an action). Recurrent connections are also popular when processing sequential data. Deep learning: A neural network with at least one hidden layer (some networks have dozens). Most state-of-the-art deep networks are trained using the backpropagation algorithm to gradually adjust their connection strengths. Backpropagation: Gradient descent applied to training a deep neural network. The gradient of the objective function (e.g., classification error or log-likelihood) with respect to the model parameters (e.g., connection weights) is used to make a series of small adjustments to the parameters in a direction that improves the objective function. Convolutional network (convnet): A neural network that uses trainable filters instead of (or in addition to) fully-connected layers with independent weights. The same filter is applied at many locations across an image (or across a time series), leading to neural networks that are effectively larger but with local connectivity and fewer free parameters. Model-free and model-based reinforcement learning: Model-free algorithms directly learn a control policy without explicitly building a model of the environment (reward and state transition distributions). Model-based algorithms learn a model of the environment and use it to select actions by planning. Deep Q-learning: A model-free reinforcement learning algorithm used to train deep neural networks on control tasks such as playing Atari games. A network is trained to approximate the optimal action-value function Q(s, a), which is the expected long-term cumulative reward of taking action a in state s and then optimally selecting future actions. Generative model: A model that specifies a probability distribution over the data. For instance, in a classification task with examples X and class labels y, a generative model specifi | 1604.00289#9 | 1604.00289#11 | 1604.00289 | [
"1511.06114"
] |
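The glossary's backpropagation and deep Q-learning entries are compact enough to state exactly. The two formulas below are a standard rendering of those definitions, not notation taken from the article itself; the learning rate \eta and discount factor \gamma are symbols we introduce.

```latex
% Gradient descent step used in backpropagation: parameters \theta,
% objective L, learning rate \eta (our notation).
\theta \leftarrow \theta - \eta \, \nabla_{\theta} L(\theta)

% Optimal action-value function targeted by deep Q-learning: the expected
% long-term cumulative reward, discounted by \gamma (our assumption), of
% taking action a in state s and acting optimally thereafter.
Q^{*}(s, a) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \;\middle|\; s_{0}=s,\ a_{0}=a,\ \text{optimal play thereafter} \right]
```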
1604.00289#11 | Building Machines That Learn and Think Like People | es the distribution of data given labels P(X | y), as well as a prior on labels P(y), which can be used for sampling new examples or for classification by using Bayes' rule to compute P(y | X). A discriminative model specifies P(y | X) directly, possibly by using a neural network to predict the label for a given data point, and cannot directly be used to sample new examples or to compute other queries regarding the data. We will generally be concerned with directed generative models (such as Bayesian networks or probabilistic programs) which can be given a causal interpretation, although undirected (non-causal) generative models (such as Boltzmann machines) are also possible. Program induction: Constructing a program that computes some desired function, where that function is typically specified by training data consisting of example input-output pairs. In the case of probabilistic programs, which specify candidate generative models for data, an abstract description language is used to define a set of allowable programs and learning is a search for the programs likely to have generated the data. | 1604.00289#10 | 1604.00289#12 | 1604.00289 | [
"1511.06114"
] |
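The generative-model entry can be made concrete in a few lines of code. The sketch below is ours, not from the article: it fits one class-conditional Gaussian P(x | y) per class plus a prior P(y) to toy 1-D data, classifies by Bayes' rule, and samples new examples, which is exactly the dual use the glossary describes.

```python
import math
import random

def fit_generative(data):
    """Fit a 1-D Gaussian P(x|y) per class and a prior P(y) from (x, y) pairs."""
    params, n = {}, len(data)
    for label in set(y for _, y in data):
        xs = [x for x, y in data if y == label]
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs) + 1e-6
        params[label] = (mean, var, len(xs) / n)  # (mean, variance, prior)
    return params

def log_gaussian(x, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify(params, x):
    """Bayes' rule: argmax over y of log P(x|y) + log P(y)."""
    return max(params, key=lambda y: log_gaussian(x, *params[y][:2])
                                     + math.log(params[y][2]))

def sample(params):
    """Unlike a discriminative model, we can also sample new (x, y) pairs."""
    y = random.choices(list(params), weights=[p for _, _, p in params.values()])[0]
    mean, var, _ = params[y]
    return random.gauss(mean, math.sqrt(var)), y

data = ([(random.gauss(0, 1), 0) for _ in range(200)]
        + [(random.gauss(3, 1), 1) for _ in range(200)])
model = fit_generative(data)
print(classify(model, 2.4), sample(model))
```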
1604.00289#12 | Building Machines That Learn and Think Like People | most exciting recent progress has been in new forms of probabilistic machine learning (Ghahramani, 2015). For example, researchers have developed automated statistical reasoning techniques (Lloyd, Duvenaud, Grosse, Tenenbaum, & Ghahramani, 2014), automated techniques for model building and selection (Grosse, Salakhutdinov, Freeman, & Tenenbaum, 2012), and probabilistic programming languages (e.g., Gelman, Lee, & Guo, 2015; Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2008; Mansinghka, Selsam, & Perov, 2014). We believe that these approaches will play important roles in future AI systems, and they are at least as compatible with the ideas from cognitive science we discuss here, but a full discussion of those connections is beyond the scope of the current article. # 1.2 Overview of the key ideas The central goal of this paper is to propose a set of core ingredients for building more human-like learning and thinking machines. We will elaborate on each of these ingredients and topics in Section 4, but here we briefly overview the key ideas. The first set of ingredients focuses on developmental "start-up software," or cognitive capabilities present early in development. There are several reasons for this focus on development. If an ingredient is present early in development, it is certainly active and available well before a child or adult would attempt to learn the types of tasks discussed in this paper. This is true regardless of whether the early-present ingredient is itself learned from experience or innately present. Also, the earlier an ingredient is present, the more likely it is to be foundational to later development and learning. We focus on two pieces of developmental start-up software (see Wellman & Gelman, 1992, for a review of both). First is intuitive physics (Section 4.1.1): Infants have primitive object concepts that allow them to track objects over time and allow them to discount physically implausible trajectories. For example, infants know that objects will persist over time and that they are solid and coherent. Equipped with these general principles, people can learn more quickly and make more accurate predictions. While a task may be new, physics still works the same way. | 1604.00289#11 | 1604.00289#13 | 1604.00289 | [
"1511.06114"
] |
1604.00289#13 | Building Machines That Learn and Think Like People | A second type of software present in early development is intuitive psychology (Section 4.1.2): Infants understand that other people have mental states like goals and beliefs, and this understanding strongly constrains their learning and predictions. A child watching an expert play a new video game can infer that the avatar has agency and is trying to seek reward while avoiding punishment. This inference immediately constrains other inferences, allowing the child to infer what objects are good and what objects are bad. These types of inferences further accelerate the learning of new tasks. Our second set of ingredients focuses on learning. While there are many perspectives on learning, we see model building as the hallmark of human-level learning, or explaining observed data through the construction of causal models of the world (Section 4.2.2). Under this perspective, the early-present capacities for intuitive physics and psychology are also causal models of the world. A primary job of learning is to extend and enrich these models, and to build analogous causally structured theories of other domains. Compared to state-of-the-art algorithms in machine learning, human learning is distinguished by its | 1604.00289#12 | 1604.00289#14 | 1604.00289 | [
"1511.06114"
] |
1604.00289#14 | Building Machines That Learn and Think Like People | richness and its efficiency. Children come with the ability and the desire to uncover the underlying causes of sparsely observed events and to use that knowledge to go far beyond the paucity of the data. It might seem paradoxical that people are capable of learning these richly structured models from very limited amounts of experience. We suggest that compositionality and learning-to-learn are ingredients that make this type of rapid model learning possible (Sections 4.2.1 and 4.2.3, respectively). | 1604.00289#13 | 1604.00289#15 | 1604.00289 | [
"1511.06114"
] |
1604.00289#15 | Building Machines That Learn and Think Like People | A final set of ingredients concerns how the rich models our minds build are put into action, in real time (Section 4.3). It is remarkable how fast we are to perceive and to act. People can comprehend a novel scene in a fraction of a second, or a novel utterance in little more than the time it takes to say it and hear it. An important motivation for using neural networks in machine vision and speech systems is to respond as quickly as the brain does. Although neural networks are usually aiming at pattern recognition rather than model-building, we will discuss ways in which these "model-free" methods can accelerate slow model-based inferences in perception and cognition (Section 4.3.1). By learning to recognize patterns in these inferences, the outputs of inference can be predicted without having to go through costly intermediate steps. Integrating neural networks that "learn to do inference" with rich model-building learning mechanisms offers a promising way to explain how human minds can understand the world so well, so quickly. We will also discuss the integration of model-based and model-free methods in reinforcement learning (Section 4.3.2), an area that has seen rapid recent progress. Once a causal model of a task has been learned, humans can use the model to plan action sequences that maximize future reward; when rewards are used as the metric for success in model-building, this is known as model-based reinforcement learning. However, planning in complex models is cumbersome and slow, making the speed-accuracy trade-off unfavorable for real-time control. By contrast, model-free reinforcement learning algorithms, such as current instantiations of deep reinforcement learning, support fast control but at the cost of inflexibility and possibly accuracy. We will review evidence that humans combine model-based and model-free learning algorithms both competitively and cooperatively, and that these interactions are supervised by metacognitive processes. The sophistication of human-like reinforcement learning has yet to be realized in AI systems, but this is an area where crosstalk between cognitive and engineering approaches is especially promising. # 2 Cognitive and neural inspiration in artificial intelligence The questions of whether and how AI should relate to human cognitive psychology are older than the terms "artificial intelligence" and "cognitive psychology." | 1604.00289#14 | 1604.00289#16 | 1604.00289 | [
"1511.06114"
] |
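To make the model-based half of the contrast above concrete, here is a minimal planning sketch. The toy three-state MDP and all its numbers are our own invention, not from the article: given an explicit transition model P(s' | s, a) and rewards, value iteration computes a plan, whereas a model-free agent would instead cache Q-values from raw experience, trading flexibility for speed.

```python
# Value iteration: planning over an explicit model of the environment.
# transitions[s][a] = list of (probability, next_state, reward) outcomes.
transitions = {
    0: {"left": [(1.0, 0, 0.0)], "right": [(1.0, 1, 1.0)]},
    1: {"left": [(1.0, 0, 0.0)], "right": [(0.8, 2, 5.0), (0.2, 1, 0.0)]},
    2: {"left": [(1.0, 1, 0.0)], "right": [(1.0, 2, 0.0)]},
}
gamma = 0.9                      # discount factor (our choice)
V = {s: 0.0 for s in transitions}
for _ in range(100):             # sweep until approximately converged
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                for outs in transitions[s].values())
         for s in transitions}
policy = {s: max(transitions[s],
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in transitions[s][a]))
          for s in transitions}
print(V, policy)
```

Changing the reward structure here only requires re-running the planner over the same model; a model-free learner would have to re-learn its cached values from new experience, which is one way to read the flexibility contrast drawn above.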
1604.00289#16 | Building Machines That Learn and Think Like People | Alan Turing suspected that it is easier to build and educate a child-machine than try to fully capture adult human cognition (Turing, 1950). Turing pictured the child's mind as a notebook with "rather little mechanism and lots of blank sheets," and the mind of a child-machine as filling in the notebook by responding to rewards and punishments, similar to reinforcement learning. This view on representation and learning echoes behaviorism, a dominant psychological tradition in Turing's time. It also echoes the strong empiricism of modern connectionist models, the idea that we can learn almost everything we know from the statistical patterns of sensory inputs. Cognitive science repudiated the over-simplified behaviorist view and came to play a central role | 1604.00289#15 | 1604.00289#17 | 1604.00289 | [
"1511.06114"
] |
1604.00289#17 | Building Machines That Learn and Think Like People | in early AI research (Boden, 2006). Newell and Simon (1961) developed their "General Problem Solver" as both an AI algorithm and a model of human problem solving, which they subsequently tested experimentally (Newell & Simon, 1972). AI pioneers in other areas of research explicitly referenced human cognition, and even published papers in cognitive psychology journals (e.g., Bobrow & Winograd, 1977; Hayes-Roth & Hayes-Roth, 1979; Winograd, 1972). For example, Schank (1972), writing in the journal Cognitive Psychology, declared that "We hope to be able to build a program that can learn, as a child does, how to do what we have described in this paper instead of being spoon-fed the tremendous information necessary." A similar sentiment was expressed by Minsky (1974): "I draw no boundary between a theory of human thinking and a scheme for making an intelligent machine; no purpose would be served by separating these today since neither domain has theories good enough to explain--or to produce--enough mental capacity." | 1604.00289#16 | 1604.00289#18 | 1604.00289 | [
"1511.06114"
] |
1604.00289#18 | Building Machines That Learn and Think Like People | Much of this research assumed that human knowledge representation is symbolic and that reasoning, language, planning and vision could be understood in terms of symbolic operations. Parallel to these developments, a radically different approach was being explored, based on neuron-like "sub-symbolic" computations (e.g., Fukushima, 1980; Grossberg, 1976; Rosenblatt, 1958). The representations and algorithms used by this approach were more directly inspired by neuroscience than by cognitive psychology, although ultimately it would flower into an influential school of thought about the nature of cognition: parallel distributed processing (PDP) (McClelland et al., 1986; Rumelhart, McClelland, & the PDP Research Group, 1986). As its name suggests, PDP emphasizes parallel computation by combining simple units to collectively implement sophisticated computations. The knowledge learned by these neural networks is thus distributed across the collection of units rather than localized as in most symbolic data structures. The resurgence of recent interest in neural networks, more commonly referred to as "deep learning," shares the same representational commitments and often even the same learning algorithms as the earlier PDP models. "Deep" refers to the fact that more powerful models can be built by composing many layers of representation (see LeCun et al., 2015; Schmidhuber, 2015, for recent reviews), still very much in the PDP style while utilizing recent advances in hardware and computing capabilities, as well as massive datasets, to learn deeper models. | 1604.00289#17 | 1604.00289#19 | 1604.00289 | [
"1511.06114"
] |
1604.00289#19 | Building Machines That Learn and Think Like People | It is also important to clarify that the PDP perspective is compatible with "model building" in addition to "pattern recognition." Some of the original work done under the banner of PDP (Rumelhart, McClelland, & the PDP Research Group, 1986) is closer to model building than pattern recognition, whereas the recent large-scale discriminative deep learning systems more purely exemplify pattern recognition (see Bottou, 2014, for a related discussion). But, as discussed, there is also a question of the nature of the learned representations within the model - their form, compositionality, and transferability - and the developmental start-up software that was used to get there. We focus on these issues in this paper. Neural network models and the PDP approach offer a view of the mind (and intelligence more broadly) that is sub-symbolic and often populated with minimal constraints and inductive biases | 1604.00289#18 | 1604.00289#20 | 1604.00289 | [
"1511.06114"
] |
1604.00289#20 | Building Machines That Learn and Think Like People | to guide learning. Proponents of this approach maintain that many classic types of structured knowledge, such as graphs, grammars, rules, objects, structural descriptions, programs, etc. can be useful yet misleading metaphors for characterizing thought. These structures are more epiphenomenal than real, emergent properties of more fundamental sub-symbolic cognitive processes (McClelland et al., 2010). Compared to other paradigms for studying cognition, this position on the nature of representation is often accompanied by a relatively "blank slate" vision of initial knowledge and representation, much like Turing's blank notebook. When attempting to understand a particular cognitive ability or phenomenon within this paradigm, a common scientific strategy is to train a relatively generic neural network to perform the task, adding additional ingredients only when necessary. This approach has shown that neural networks can behave as if they learned explicitly structured knowledge, such as a rule for producing the past tense of words (Rumelhart & McClelland, 1986), rules for solving simple balance-beam physics problems (McClelland, 1988), or a tree to represent types of living things (plants and animals) and their distribution of properties (Rogers & McClelland, 2004). Training large-scale relatively generic networks is also the best current approach for object recognition (He et al., 2015; Krizhevsky et al., 2012; Russakovsky et al., 2015; Szegedy et al., 2014), where the high-level feature representations of these convolutional nets have also been used to predict patterns of neural response in human and macaque IT cortex (Khaligh-Razavi & Kriegeskorte, 2014; Kriegeskorte, 2015; Yamins et al., 2014) as well as human typicality ratings (Lake, Zaremba, Fergus, & Gureckis, 2015) and similarity ratings (Peterson, Abbott, & Griffiths, 2016) for images of common objects. | 1604.00289#19 | 1604.00289#21 | 1604.00289 | [
"1511.06114"
] |
1604.00289#21 | Building Machines That Learn and Think Like People | Moreover, researchers have trained generic networks to perform structured and even strategic tasks, such as the recent work on using a Deep Q-learning Network (DQN) to play simple video games (V. Mnih et al., 2015). If neural networks have such broad application in machine vision, language, and control, and if they can be trained to emulate the rule-like and structured behaviors that characterize cognition, do we need more to develop truly human-like learning and thinking machines? How far can relatively generic neural networks bring us towards this goal? # 3 Challenges for building more human-like machines While cognitive science has not yet converged on a single account of the mind or intelligence, the claim that a mind is a collection of general purpose neural networks with few initial constraints is rather extreme in contemporary cognitive science. | 1604.00289#20 | 1604.00289#22 | 1604.00289 | [
"1511.06114"
] |
1604.00289#22 | Building Machines That Learn and Think Like People | A different picture has emerged that highlights the importance of early inductive biases, including core concepts such as number, space, agency and objects, as well as powerful learning algorithms that rely on prior knowledge to extract knowledge from small amounts of training data. This knowledge is often richly organized and theory-like in structure, capable of the graded inferences and productive capacities characteristic of human thought. Here we present two challenge problems for machine learning and AI: learning simple visual concepts (Lake, Salakhutdinov, & Tenenbaum, 2015) and learning to play the Atari game Frostbite (V. Mnih et al., 2015). We also use the problems as running examples to illustrate the importance of core cognitive ingredients in the sections that follow. | 1604.00289#21 | 1604.00289#23 | 1604.00289 | [
"1511.06114"
] |
1604.00289#23 | Building Machines That Learn and Think Like People | # 3.1 The Characters Challenge The first challenge concerns handwritten character recognition, a classic problem for comparing different types of machine learning algorithms. Hofstadter (1985) argued that the problem of recognizing characters in all the ways people do - both handwritten and printed - contains most if not all of the fundamental challenges of AI. Whether or not this statement is right, it highlights the surprising complexity that underlies even "simple" human-level concepts like letters. More practically, handwritten character recognition is a real problem that children and adults must learn to solve, with practical applications ranging from reading envelope addresses to reading checks in an ATM machine. Handwritten character recognition is also simpler than more general forms of object recognition - the object of interest is two-dimensional, separated from the background, and usually unoccluded. Compared to how people learn and see other types of objects, it seems possible, in the near term, to build algorithms that can see most of the structure in characters that people can see. The standard benchmark is the MNIST data set for digit recognition, which involves classifying images of digits into the categories "0"-"9" (LeCun, Bottou, Bengio, & Haffner, 1998). The training set provides 6,000 images per class for a total of 60,000 training images. With a large amount of training data available, many algorithms achieve respectable performance, including K-nearest neighbors (5% test error), support vector machines (about 1% test error), and convolutional neural networks (below 1% test error; LeCun et al., 1998). The best results achieved using deep convolutional nets are very close to human-level performance at an error rate of 0.2% (Ciresan, Meier, & Schmidhuber, 2012). Similarly, recent results applying convolutional nets to the far more challenging ImageNet object recognition benchmark have shown that human-level performance is within reach on that data set as well (Russakovsky et al., 2015). | 1604.00289#22 | 1604.00289#24 | 1604.00289 | [
"1511.06114"
] |
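Benchmarks of this kind are easy to reproduce in spirit. The sketch below is illustrative rather than a replication: it runs K-nearest neighbors on scikit-learn's small bundled digits set (8x8 images, not the full 28x28 MNIST), so the exact error will differ from the roughly 5% the text cites for K-NN on MNIST, though it lands in the same "respectable with enough data" regime.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 1,797 8x8 grayscale digit images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
error = 1.0 - knn.score(X_test, y_test)
print(f"K-NN test error: {error:.3f}")  # typically a few percent on this set
```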
1604.00289#24 | Building Machines That Learn and Think Like People | While humans and neural networks may perform equally well on the MNIST digit recognition task and other large-scale image classification tasks, it does not mean that they learn and think in the same way. There are at least two important differences: people learn from fewer examples and they learn richer representations, a comparison true both for learning handwritten characters and for learning more general classes of objects (Figure 1). People can learn to recognize a new handwritten character from a single example (Figure 1A-i), allowing them to discriminate between novel instances drawn by other people and similar-looking non-instances (Lake, Salakhutdinov, & Tenenbaum, 2015; E. G. Miller, Matsakis, & Viola, 2000). Moreover, people learn more than how to do pattern recognition: they learn a concept - that is, a model of the class that allows their acquired knowledge to be flexibly applied in new ways. In addition to recognizing new examples, people can also generate new examples (Figure 1A-ii), parse a character into its most important parts and relations (Figure 1A-iii; Lake, Salakhutdinov, and Tenenbaum (2012)), and generate new characters given a small set of related characters (Figure 1A-iv). These additional abilities come for free along with the acquisition of the underlying concept. Even for these simple visual concepts, people are still better and more sophisticated learners than the best algorithms for character recognition. People learn a lot more from a lot less, and capturing these human-level learning abilities in machines is the Characters Challenge. We recently reported progress on this challenge using probabilistic program induction (Lake, Salakhutdinov, & Tenenbaum, 2015), yet aspects of the full human cognitive ability remain out of reach. While both people and model represent characters as a sequence of pen strokes and relations, people have | 1604.00289#23 | 1604.00289#25 | 1604.00289 | [
"1511.06114"
] |
1604.00289#25 | Building Machines That Learn and Think Like People | [Figure 1 image] Figure 1: The characters challenge: human-level learning of a novel handwritten character (A), with the same abilities also illustrated for a novel two-wheeled vehicle (B). A single example of a new visual concept (red box) can be enough information to support the (i) classification of new examples, (ii) generation of new examples, (iii) parsing an object into parts and relations, and (iv) generation of new concepts from related concepts. Adapted from Lake, Salakhutdinov, and Tenenbaum (2015). a far richer repertoire of structural relations between strokes. Furthermore, people can efficiently integrate across multiple examples of a character to infer which have optional elements, such as the horizontal cross-bar in "7"s, combining different variants of the same character into a single coherent representation. Additional progress may come by combining deep learning and probabilistic program induction to tackle even richer versions of the Characters Challenge. | 1604.00289#24 | 1604.00289#26 | 1604.00289 | [
"1511.06114"
] |
1604.00289#26 | Building Machines That Learn and Think Like People | # 3.2 The Frostbite Challenge The second challenge concerns the Atari game Frostbite (Figure 2), which was one of the control problems tackled by the DQN of V. Mnih et al. (2015). The DQN was a significant advance in reinforcement learning, showing that a single algorithm can learn to play a wide variety of complex tasks. The network was trained to play 49 classic Atari games, proposed as a test domain for reinforcement learning (Bellemare, Naddaf, Veness, & Bowling, 2013), impressively achieving human-level performance or above on 29 of the games. It did, however, have particular trouble with Frostbite and other games that required temporally extended planning strategies. In Frostbite, players control an agent (Frostbite Bailey) tasked with constructing an igloo within a time limit. The igloo is built piece-by-piece as the agent jumps on ice floes in water (Figure 2A-C). The challenge is that the ice floes are in constant motion (moving either left or right), and ice floes only contribute to the construction of the igloo if they are visited in an active state (white rather than blue). The agent may also earn extra points by gathering fish while avoiding a number of fatal hazards (falling in the water, snow geese, polar bears, etc.). Success in this game requires a | 1604.00289#25 | 1604.00289#27 | 1604.00289 | [
"1511.06114"
] |
1604.00289#27 | Building Machines That Learn and Think Like People | Figure 2: Screenshots of Frostbite, a 1983 video game designed for the Atari game console. A) The start of a level in Frostbite. The agent must construct an igloo by hopping between ice floes and avoiding obstacles such as birds. The floes are in constant motion (either left or right), making multi-step planning essential to success. B) The agent receives pieces of the igloo (top right) by jumping on the active ice floes (white), which then deactivates them (blue). C) At the end of a level, the agent must safely reach the completed igloo. D) Later levels include additional rewards (fish) and deadly obstacles (crabs, clams, and bears). temporally extended plan to ensure the agent can accomplish a sub-goal (such as reaching an ice floe) and then safely proceed to the next sub-goal. Ultimately, once all of the pieces of the igloo are in place, the agent must proceed to the igloo and thus complete the level before time expires (Figure 2C). | 1604.00289#26 | 1604.00289#28 | 1604.00289 | [
"1511.06114"
] |
1604.00289#28 | Building Machines That Learn and Think Like People | The DQN learns to play Frostbite and other Atari games by combining a powerful pattern recognizer (a deep convolutional neural network) and a simple model-free reinforcement learning algorithm (Q-learning; Watkins & Dayan, 1992). These components allow the network to map sensory inputs (frames of pixels) onto a policy over a small set of actions, and both the mapping and the policy are trained to optimize long-term cumulative reward (the game score). The network embodies the strongly empiricist approach characteristic of most connectionist models: very little is built into the network apart from the assumptions about image structure inherent in convolutional networks, so the network has to essentially learn a visual and conceptual system from scratch for each new game. In V. Mnih et al. (2015), the network architecture and hyper-parameters were fixed, but | 1604.00289#27 | 1604.00289#29 | 1604.00289 | [
"1511.06114"
] |
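The Q-learning component is simple to state in code. Below is a sketch of the classic tabular update that the DQN approximates; the DQN replaces the table with a deep convolutional network over pixels and adds refinements such as experience replay. The `env` interface (reset/step) is a hypothetical stand-in for an Atari emulator, and the hyper-parameter values are illustrative defaults, not those of the cited work.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning. `env` is any object with reset() -> s and
    step(a) -> (s2, r, done). alpha is the learning rate, gamma the
    discount factor, eps the exploration rate."""
    Q = defaultdict(float)  # Q[(state, action)], zero-initialized
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection balances exploration/exploitation
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda act: Q[(s, act)]))
            s2, r, done = env.step(a)
            # one-step temporal-difference update toward r + gamma * max_a' Q(s2, a')
            target = r + (0.0 if done else
                          gamma * max(Q[(s2, a2)] for a2 in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```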
1604.00289#29 | Building Machines That Learn and Think Like People | the network was trained anew for each game, meaning the visual system and the policy are highly specialized for the games it was trained on. More recent work has shown how these game-specific networks can share visual features (Rusu et al., 2016) or be used to train a multi-task network (Parisotto, Ba, & Salakhutdinov, 2016), achieving modest benefits of transfer when learning to play new games. Although it is interesting that the DQN learns to play games at human-level performance while assuming very little prior knowledge, the DQN may be learning to play Frostbite and other games in a very diff | 1604.00289#28 | 1604.00289#30 | 1604.00289 | [
"1511.06114"
] |
1604.00289#30 | Building Machines That Learn and Think Like People | erent way than people do. One way to examine the differences is by considering the amount of experience required for learning. In V. Mnih et al. (2015), the DQN was compared with a professional gamer who received approximately two hours of practice on each of the 49 Atari games (although he or she likely had prior experience with some of the games). The DQN was trained on 200 million frames from each of the games, which equates to approximately 924 hours of game time (about 38 days), or almost 500 times as much experience as the human received.[2] Additionally, the DQN incorporates experience replay, where each of these frames is replayed approximately 8 more times on average over the course of learning. With the full 924 hours of unique experience and additional replay, the DQN achieved less than 10% of human-level performance during a controlled test session (see DQN in Fig. 3). More recent variants of the DQN have demonstrated superior performance (Schaul et al., 2016; Stadie et al., 2016; van Hasselt, Guez, & Silver, 2016; Wang et al., 2016), reaching 83% of the professional gamer's score by incorporating smarter experience replay (Schaul et al., 2016) and 96% by using smarter replay and more efficient parameter sharing (Wang et al., 2016) (see DQN+ and DQN++ in Fig. 3).[3] But they require a lot of experience to reach this level: the learning curve provided in Schaul et al. (2016) shows performance is around 46% after 231 hours, 19% after 116 hours, and below 3.5% after just 2 hours (which is close to random play, approximately 1.5%). | 1604.00289#29 | 1604.00289#31 | 1604.00289 | [
"1511.06114"
] |
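The experience gap is easy to verify with back-of-the-envelope arithmetic; the 60 fps frame rate is our assumption, consistent with the caption of Figure 3.

```python
frames = 200_000_000           # training frames per game (from the text)
fps = 60                       # Atari 2600 frame rate (our assumption)
hours = frames / fps / 3600    # seconds -> hours
print(hours)                   # ~925.9 hours of game time
print(hours / 24)              # ~38.6 days
print(hours / 2)               # ~463x the human's ~2 hours, i.e. "almost 500 times"
```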
1604.00289#31 | Building Machines That Learn and Think Like People | The differences between the human and machine learning curves suggest that they may be learning different kinds of knowledge, using different learning mechanisms, or both. The contrast becomes even more dramatic if we look at the very earliest stages of learning. While both the original DQN and these more recent variants require multiple hours of experience to perform reliably better than random play, even non-professional humans can grasp the basics of the game after just a few minutes of play. We speculate that people do this by inferring a general schema to describe the goals of the game and the object types and their interactions, using the kinds of intuitive theories, model-building abilities and model-based planning mechanisms we describe below. While novice players may make some mistakes, such as inferring that fish are harmful rather than helpful, they can learn to play better than chance within a few minutes. If humans are able to first watch an expert playing for a few minutes, they can learn even faster. In informal experiments with two of the authors playing Frostbite on a Javascript emulator (http://www.virtualatari.org/soft.php?soft=Frostbite), after watching videos of expert play on YouTube for just two minutes, we found that we were able to reach scores comparable to or [2] The time required to train the DQN (compute time) is not the same as the game (experience) time. Compute time can be longer. | 1604.00289#30 | 1604.00289#32 | 1604.00289 | [
"1511.06114"
] |
1604.00289#32 | Building Machines That Learn and Think Like People | [3] The reported scores use the "human starts" measure of test performance, designed to prevent networks from just memorizing long sequences of successful actions from a single starting point. Both faster learning (Blundell et al., 2016) and higher scores (Wang et al., 2016) have been reported using other metrics, but it is unclear how well the networks are generalizing with these alternative metrics. [Figure 3 plot: Frostbite score (0-5000) versus amount of game experience in hours (2-924).] Figure 3: | 1604.00289#31 | 1604.00289#33 | 1604.00289 | [
"1511.06114"
] |
1604.00289#33 | Building Machines That Learn and Think Like People | Comparing learning speed for people versus Deep Q-Networks (DQNs). Test performance on the Atari 2600 game "Frostbite" is plotted as a function of game experience (in hours at a frame rate of 60 fps), which does not include additional experience replay. Learning curves (if available) and scores are shown from different networks: DQN (V. Mnih et al., 2015), DQN+ (Schaul et al., 2016), and DQN++ (Wang et al., 2016). Random play achieves a score of 66.4. The "human starts" performance measure is used (van Hasselt et al., 2016). | 1604.00289#32 | 1604.00289#34 | 1604.00289 | [
"1511.06114"
] |
1604.00289#34 | Building Machines That Learn and Think Like People | better than the human expert reported in V. Mnih et al. (2015) after at most 15-20 minutes of total practice.[4] There are other behavioral signatures that suggest fundamental differences in representation and learning between people and the DQN. For instance, the game of Frostbite provides incremental rewards for reaching each active ice floe, providing the DQN with the relevant sub-goals for completing the larger task of building an igloo. Without these sub-goals, the DQN would have to take random actions until it accidentally builds an igloo and is rewarded for completing the entire level. In contrast, people likely do not rely on incremental scoring in the same way when figuring out how to play a new game. In Frostbite, it is possible to figure out the higher-level goal of building an igloo without incremental feedback; similarly, sparse feedback is a source of difficulty in other Atari 2600 games such as Montezuma's Revenge where people substantially outperform current DQN approaches. The learned DQN network is also rather inflexible to changes in its inputs and goals: changing the color or appearance of objects or changing the goals of the network would have devastating consequences on performance if the network is not retrained. While any specific model is necessarily [4] More precisely, the human expert in V. Mnih et al. (2015) scored an average of 4335 points across 30 game sessions of up to five minutes of play. In individual sessions lasting no longer than five minutes, author TDU obtained scores of 3520 points after approximately 5 minutes of gameplay, 3510 points after 10 minutes, and 7810 points after 15 minutes. Author JBT obtained 4060 after approximately 5 minutes of gameplay, 4920 after 10-15 minutes, and 6710 after no more than 20 minutes. TDU and JBT each watched approximately two minutes of expert play on YouTube (e.g., https://www.youtube.com/watch?v=ZpUFztf9Fjc, but there are many similar examples that can be found in a YouTube search). | 1604.00289#33 | 1604.00289#35 | 1604.00289 | [
"1511.06114"
] |
1604.00289#35 | Building Machines That Learn and Think Like People | simplified and should not be held to the standard of general human intelligence, the contrast between DQN and human flexibility is striking nonetheless. For example, imagine you are tasked with playing Frostbite with any one of these new goals: Get the lowest possible score. Get closest to 100, or 300, or 1000, or 3000, or any level, without going over. Beat your friend, who's playing next to you, but just barely, not by too much, so as not to embarrass them. Go as long as you can without dying. Die as quickly as you can. Pass each level at the last possible minute, right before the temperature timer hits zero and you die (i.e., come as close as you can to dying from frostbite without actually dying). Get to the furthest unexplored level without regard for your score. See if you can discover secret Easter eggs. Get as many fish as you can. Touch all the individual ice floes on screen once and only once. Teach your friend how to play as efficiently as possible. | 1604.00289#34 | 1604.00289#36 | 1604.00289 | [
"1511.06114"
] |
1604.00289#36 | Building Machines That Learn and Think Like People | This range of goals highlights an essential component of human intelligence: people can learn models and use them for arbitrary new tasks and goals. While neural networks can learn multiple mappings or tasks with the same set of stimuli - adapting their outputs depending on a specified goal - these models require substantial training or reconfiguration to add new tasks (e.g., Collins & Frank, 2013; Eliasmith et al., 2012; Rougier, Noelle, Braver, Cohen, & O'Reilly, 2005). In contrast, people require little or no retraining or reconfiguration, adding new tasks and goals to their repertoire with relative ease. | 1604.00289#35 | 1604.00289#37 | 1604.00289 | [
"1511.06114"
] |
1604.00289#37 | Building Machines That Learn and Think Like People | The Frostbite example is a particularly telling contrast when compared with human play. Even the best deep networks learn gradually over many thousands of game episodes, take a long time to reach good performance and are locked into particular input and goal patterns. Humans, after playing just a small number of games over a span of minutes, can understand the game and its goals well enough to perform better than deep networks do after almost a thousand hours of experience. Even more impressively, people understand enough to invent or accept new goals, generalize over changes to the input, and explain the game to others. | 1604.00289#36 | 1604.00289#38 | 1604.00289 | [
"1511.06114"
] |
1604.00289#38 | Building Machines That Learn and Think Like People | Why are people different? What core ingredients of human intelligence might the DQN and other modern machine learning methods be missing? One might object that both the Frostbite and Characters challenges draw an unfair comparison between the speed of human learning and neural network learning. We discuss this objection in detail in Section 5, but we feel it is important to anticipate here as well. To paraphrase one reviewer of an earlier draft of this article, "It is not that DQN and people are solving the same task | 1604.00289#37 | 1604.00289#39 | 1604.00289 | [
"1511.06114"
] |
1604.00289#39 | Building Machines That Learn and Think Like People | differently. They may be better seen as solving different tasks. Human learners - unlike DQN and many other deep learning systems - approach new problems armed with extensive prior experience. The human is encountering one in a years-long string of problems, with rich overlapping structure. Humans as a result often have important domain-specific knowledge for these tasks, even before they 'begin.' The DQN is starting completely from scratch." We agree, and indeed this is another way of putting our point here. Human learners fundamentally take on different learning tasks than today's neural networks, and if we want to build machines that learn and think like people, our machines need to confront the kinds of tasks that human learners do, not shy away from them. People never start completely from scratch, or even close to "from scratch," and that is the secret to their success. | 1604.00289#38 | 1604.00289#40 | 1604.00289 | [
"1511.06114"
] |
1604.00289#40 | Building Machines That Learn and Think Like People | The challenge of building models of human learning and thinking then becomes: How do we bring to bear rich prior knowledge to learn new tasks and solve new problems so quickly? What form does that prior knowledge take, and how is it constructed, from some combination of inbuilt capacities and previous experience? The core ingredients we propose in the next section offer one route to meeting this challenge. # 4 Core ingredients of human intelligence In the Introduction, we laid out what we see as core ingredients of intelligence. Here we consider the ingredients in detail and contrast them with the current state of neural network modeling. While these are hardly the only ingredients needed for human-like learning and thought (see our discussion of language in Section 5), they are key building blocks which are not present in most current learning-based AI systems - certainly not all present together - and for which additional attention may prove especially fruitful. | 1604.00289#39 | 1604.00289#41 | 1604.00289 | [
"1511.06114"
] |
1604.00289#41 | Building Machines That Learn and Think Like People | We believe that integrating them will produce significantly more powerful and more human-like learning and thinking abilities than we currently see in AI systems. Before considering each ingredient in detail, it is important to clarify that by "core ingredient" we do not necessarily mean an ingredient that is innately specified by genetics or must be "built in" to any learning algorithm. We intend our discussion to be agnostic with regards to the origins of the key ingredients. By the time a child or an adult is picking up a new character or learning how to play Frostbite, they are armed with extensive real world experience that deep learning systems do not benefi | 1604.00289#40 | 1604.00289#42 | 1604.00289 | [
"1511.06114"
] |
1604.00289#42 | Building Machines That Learn and Think Like People | t from - experience that would be hard to emulate in any general sense. Certainly, the core ingredients are enriched by this experience, and some may even be a product of the experience itself. Whether learned, built in, or enriched, the key claim is that these ingredients play an active and important role in producing human-like learning and thought, in ways contemporary machine learning has yet to capture. # 4.1 Developmental start-up software Early in development, humans have a foundational understanding of several core domains (Spelke, 2003, 2007). These domains include number (numerical and set operations), space (geometry and navigation), physics (inanimate objects and mechanics) and psychology (agents and groups). These core domains cleave cognition at its conceptual joints, and each domain | 1604.00289#41 | 1604.00289#43 | 1604.00289 | [
"1511.06114"
] |
1604.00289#43 | Building Machines That Learn and Think Like People | is organized by a set of entities and abstract principles relating the entities. The underlying cognitive representations can be understood as "intuitive theories," with a causal structure resembling a scientific theory (Carey, 2004, 2009; Gopnik et al., 2004; Gopnik & Meltzoff, 1999; Gweon, Tenenbaum, & Schulz, 2010; L. Schulz, 2012; Wellman & Gelman, 1992, 1998). | 1604.00289#42 | 1604.00289#44 | 1604.00289 | [
"1511.06114"
] |
1604.00289#44 | Building Machines That Learn and Think Like People | The "child as scientist" proposal further views the process of learning itself as also scientist-like, with recent experiments showing that children seek out new data to distinguish between hypotheses, isolate variables, test causal hypotheses, make use of the data-generating process in drawing conclusions, and learn selectively from others (Cook, Goodman, & Schulz, 2011; Gweon et al., 2010; L. E. Schulz, Gopnik, & Glymour, 2007; Stahl & Feigenson, 2015; Tsividis, Gershman, Tenenbaum, & Schulz, 2013). We will address the nature of learning mechanisms in Section 4.2. Each core domain has been the target of a great deal of study and analysis, and together the domains are thought to be shared cross-culturally and partly with non-human animals. All of these domains may be important augmentations to current machine learning, though below we focus in particular on the early understanding of objects and agents. # 4.1.1 Intuitive physics Young children have rich knowledge of intuitive physics. Whether learned or innate, important physical concepts are present at ages far earlier than when a child or adult learns to play Frostbite, suggesting these resources may be used for solving this and many everyday physics-related tasks. At the age of 2 months and possibly earlier, human infants expect inanimate objects to follow principles of persistence, continuity, cohesion and solidity. Young infants believe objects should move along smooth paths, not wink in and out of existence, not inter-penetrate and not act at a distance (Spelke, 1990; Spelke, Gutheil, & Van de Walle, 1995). These expectations guide object segmentation in early infancy, emerging before appearance-based cues such as color, texture, and perceptual goodness (Spelke, 1990). These expectations also go on to guide later learning. At around 6 months, infants have already developed different expectations for rigid bodies, soft bodies and liquids (Rips & Hespos, 2015). Liquids, for example, are expected to go through barriers, while solid objects cannot (Hespos, Ferry, & Rips, 2009). | 1604.00289#43 | 1604.00289#45 | 1604.00289 | [
"1511.06114"
] |
1604.00289#45 | Building Machines That Learn and Think Like People | By their first birthday, infants have gone through several transitions of comprehending basic physical concepts such as inertia, support, containment and collisions (Baillargeon, 2004; Baillargeon, Li, Ng, & Yuan, 2009; Hespos & Baillargeon, 2008). There is no single agreed-upon computational account of these early physical principles and concepts, and previous suggestions have ranged from decision trees (Baillargeon et al., 2009), to cues, to lists of rules (Siegler & Chen, 1998). A promising recent approach sees intuitive physical reasoning as similar to inference over a physics software engine, the kind of simulators that power modern-day animations and games (Bates, Yildirim, Tenenbaum, & Battaglia, 2015; Battaglia, Hamrick, & Tenenbaum, 2013; Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2015; Sanborn, Mansinghka, & Griffiths, 2013). According to this hypothesis, people reconstruct a perceptual scene using internal representations of the objects and their physically relevant properties (such as mass, elasticity, and surface friction), and forces acting on objects (such as gravity, friction, or collision impulses). Relative to physical ground truth, the intuitive physical state representation | 1604.00289#44 | 1604.00289#46 | 1604.00289 | [
"1511.06114"
] |
1604.00289#46 | Building Machines That Learn and Think Like People | [Figure 4 image. Panel A: 1. Inputs -> 2. Intuitive Physics Engine -> 3. Outputs ("Will it fall? Which direction?"). Panel B: Changes to input, e.g., add blocks, blocks made of styrofoam, blocks made of lead, blocks made of goo, table is made of rubber, table is actually quicksand, pour water on the tower, pour honey on the tower, blue blocks are glued together, red blocks are magnetic, gravity is reversed, wind blows over table, table has slippery ice on top...] Figure 4: The intuitive physics-engine approach to scene understanding, illustrated through tower stability. (A) The engine takes in inputs through perception, language, memory and other faculties. It then constructs a physical scene with objects, physical properties and forces, simulates the scene's development over time and hands the output to other reasoning systems. (B) Many possible "tweaks" to the input can result in much different scenes, requiring the potential discovery, training and evaluation of new features for each tweak. | 1604.00289#45 | 1604.00289#47 | 1604.00289 | [
"1511.06114"
] |
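A drastically reduced version of this simulation hypothesis can be sketched in code. The toy model below is entirely our own construction, standing in for the full rigid-body simulators used in the cited work: block positions, the center-of-mass support rule, and the noise level are all illustrative assumptions. Stability is judged, as in the probabilistic-simulation account, by imagining many noisy reconstructions of the scene and counting how often the tower falls.

```python
import random

def tower_falls(blocks):
    """blocks: list of (x_center, width) from bottom to top, equal masses.
    A crude surrogate for rigid-body physics: the stack above a joint falls
    if its center of mass overhangs the edge of the block below."""
    for i in range(1, len(blocks)):
        above = blocks[i:]
        com = sum(x for x, _ in above) / len(above)
        x_below, w_below = blocks[i - 1]
        if abs(com - x_below) > w_below / 2:
            return True
    return False

def p_fall(blocks, noise=0.2, n_sim=1000):
    """Probabilistic stability judgment: simulate noisy percepts of the
    block positions and report how often the imagined tower topples."""
    falls = 0
    for _ in range(n_sim):
        noisy = [(x + random.gauss(0, noise), w) for x, w in blocks]
        falls += tower_falls(noisy)
    return falls / n_sim

tower = [(0.0, 1.0), (0.3, 1.0), (0.55, 1.0)]  # progressively offset blocks
print(p_fall(tower))
```

Note how the "tweaks" listed in panel B of Figure 4 map onto changes to this model's inputs or dynamics (different masses, friction, glue between blocks), rather than onto retraining a feature-based classifier, which is the contrast the figure caption draws.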