# Visualizing Deep Neural Network Decisions: Prediction Difference Analysis (arXiv:1702.04595)

Red are positive values, and blue negative. For each slice, the left image shows the original image, overlaid with the relevance values. The right image shows the original image with reversed colors and the relevance values. Relevance values are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.

Figure 10: Prediction difference visualization for different samples. The first four samples are of the class "healthy"; the last four of the class "HIV". All images show slice 39 (along the first axis). All samples are correctly classified, and the results show evidence for (red) and against (blue) this decision. Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.

Figure 11: Visualization results across different slices (29, 31, 33, 35, 37, 39) of the MRI image, using the same input image as shown in Figure 9. Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum value.

Figure 12: How the patch size influences the visualization. For the input image (HIV sample, slice 39 along the first axis) we show the visualization with different patch sizes (k in Alg. 1), here k = 2, 3 and 10. Prediction differences are shown only for voxels with (absolute) relevance value above 15% of the (absolute) maximum (for k = 2 it is 10%).
# ACKNOWLEDGMENTS

This work was supported by an AWS in Education Grant award. We thank Facebook and Google for financial support, and our reviewers for their time and valuable, constructive feedback. This work was also in part supported by: Innoviris, the Brussels Institute for Research and Innovation, Brussels, Belgium; the Nuts-OHRA Foundation (grant no. 1003-026), Amsterdam, The Netherlands; The Netherlands Organization for Health Research and Development (ZonMW) together with AIDS Fonds (grant no. 300020007 and 2009063). Additional unrestricted scientific grants were received from Gilead Sciences, ViiV Healthcare, Janssen Pharmaceutica N.V., Bristol-Myers Squibb, Boehringer Ingelheim, and Merck&Co. We thank Barbara Elsenga, Jane Berkel, Sandra Moll, Maja Totté, and Marjolein Martens for running the AGEhIV study program and capturing our data with such care and passion. We thank Yolanda Ruijs-Tiggelman, Lia Veenenberg-Benschop, Sima Zaheri, and Mariska Hillebregt at the HIV Monitoring Foundation for their contributions to data management. We thank Aafien Henderiks and Hans-Erik Nobel for their advice on logistics and organization at the Academic Medical Center. We thank all HIV-physicians and HIV-nurses at the Academic Medical Center for their efforts to include the HIV-infected participants into the AGEhIV Cohort Study, and the Municipal Health Service Amsterdam personnel for their efforts to include the HIV-uninfected participants into the AGEhIV Cohort Study. We thank all study participants without whom this research would not be possible.

AGEhIV Cohort Study Group. Scientific oversight and coordination: P. Reiss (principal investigator), F.W.N.M. Wit, M. van der Valk, J. Schouten, K.W. Kooij, R.A. van Zoest, E. Verheij, B.C. Elsenga (Academic Medical Center (AMC), Department of Global Health and Amsterdam Institute for Global Health and Development (AIGHD)). M. Prins (co-principal investigator), M.F. Schim van der Loeff, M. Martens, S. Moll, J. Berkel, M. Totté, G.R. Visser, L. May, S. Kovalev, A. Newsum, M. Dijkstra (Public Health Service of Amsterdam, Department of Infectious Diseases). Data management: S. Zaheri, M.M.J. Hillebregt, Y.M.C. Ruijs, D.P. Benschop, A. el Berkaoui (HIV Monitoring Foundation). Central laboratory support: N.A. Kootstra, A.M. Harskamp-Holwerda, I. Maurer, T. Booiman, M.M. Mangas Ruiz, A.F. Girigorie, B. Boeser-Nunnink (AMC, Laboratory for Viral Immune Pathogenesis and Department of Experimental Immunology). Project management and administrative support: W. Zikkenheiner, F.R. Janssen (AIGHD). Participating HIV physicians and nurses: S.E. Geerlings, M.H. Godfried, A. Goorhuis, J.W.R. Hovius, J.T.M. van der Meer, F.J.B. Nellen, T. van der Poll, J.M. Prins, P. Reiss, M. van der Valk, W.J. Wiersinga, M. van Vugt, G. de Bree, F.W.N.M. Wit; J. van Eden, A.M.H. van Hes, M. Mutschelknauss, H.E. Nobel, F.J.J. Pijnappel, M. Bijsterveld, A. Weijsenfeld, S. Smalhout (AMC, Division of Infectious Diseases). Other collaborators: J. de Jong, P.G. Postema (AMC, Department of Cardiology); P.H.L.T. Bisschop, M.J.M. Serlie (AMC, Division of Endocrinology and Metabolism); P. Lips (Free University Medical Center Amsterdam); E. Dekker (AMC, Department of Gastroenterology); N. van der Velde (AMC, Division of Geriatric Medicine); J.M.R. Willemsen, L. Vogt (AMC, Division of Nephrology); J. Schouten, P. Portegies, B.A. Schmand, G.J. Geurtsen (AMC, Department of Neurology); F.D. Verbraak, N. Demirkaya (AMC, Department of Ophthalmology); I. Visser (AMC, Department of Psychiatry); A. Schadé (Free University Medical Center Amsterdam, Department of Psychiatry); P.T. Nieuwkerk, N. Langebeek (AMC, Department of Medical Psychology); R.P. van Steenwijk, E. Dijkers (AMC, Department of Pulmonary Medicine); C.B.L.M. Majoie, M.W.A. Caan, T. Su (AMC, Department of Radiology); H.W. van Lunsen, M.A.F. Nievaard (AMC, Department of Gynaecology); B.J.H. van den Born, E.S.G. Stroes (AMC, Division of Vascular Medicine); W.M.C. Mulder (HIV Vereniging Nederland).

# REFERENCES

Jesper LR Andersson, Mark Jenkinson, and Stephen Smith. Non-linear optimisation. FMRIB technical report TR07JA1. University of Oxford FMRIB Centre: Oxford, UK, 2007.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015.

Christine Ecker, Andre Marquand, Janaina Mourão-Miranda, Patrick Johnston, Eileen M Daly, Michael J Brammer, Stefanos Maltezos, Clodagh M Murphy, Dene Robertson, Steven C Williams, et al. Describing the brain in autism in five dimensions – magnetic resonance imaging-assisted diagnosis of autism spectrum disorder using a multiparameter classification approach. The Journal of Neuroscience, 30(32):10612–10623, 2010.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Dept. IRO, Université de Montréal, Tech. Rep, 4323, 2009.

Bilwaj Gaonkar and Christos Davatzikos. Analytic estimation of statistical significance maps for support vector machine based multi-variate image analysis and classification. NeuroImage, 78:270–283, 2013.

Stefan Haufe, Frank Meinecke, Kai Görgen, Sven Dähne, John-Dylan Haynes, Benjamin Blankertz, and Felix Bießmann. On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87:96–110, 2014.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Stefan Klöppel, Cynthia M Stonnington, Carlton Chu, Bogdan Draganski, Rachael I Scahill, Jonathan D Rohrer, Nick C Fox, Clifford R Jack, John Ashburner, and Richard SJ Frackowiak. Automatic classification of MR scans in Alzheimer's disease. Brain, 131(3):681–689, 2008.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Janaina Mourao-Miranda, Arun LW Bokde, Christine Born, Harald Hampel, and Martin Stetter. Classifying brain states and determining the discriminating activation patterns: Support vector machine on functional MRI data. NeuroImage, 28(4):980–995, 2005.

Marko Robnik-Šikonja and Igor Kononenko. Explaining classifications for individual instances. Knowledge and Data Engineering, IEEE Transactions on, 20(5):589–600, 2008.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.

Shayan Shahand, Ammar Benabdelkader, Mohammad Mahdi Jaghoori, Mostapha al Mourabit, Jordi Huguet, Matthan WA Caan, Antoine HC Kampen, and Sílvia D Olabarriaga. A data-centric neuroscience gateway: design, implementation, and experiences. Concurrency and Computation: Practice and Experience, 27(2):489–506, 2015.

Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.

Ze Wang, Anna R Childress, Jiongjiong Wang, and John A Detre. Support vector machine learning-based fMRI data group analysis. NeuroImage, 36(4):1139–1151, 2007.

Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision – ECCV 2014, pp. 818–833. Springer, 2014.

Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.
# A RANDOM RESULTS

[Figure 13 shows 34 image panels labeled with their ground-truth ImageNet classes and the classifier's rank of that class, e.g., t-for-two (1), spatula (47), dishrag (4), scuba diver (3), coffeepot (4).]

Figure 13: Results on 34 randomly chosen ImageNet images. Middle columns: original image; left columns: sensitivity maps (Simonyan et al., 2013) where the red pixels indicate high sensitivity, and white pixels mean no sensitivity (note that we show the absolute values of the partial derivatives, since the sign cannot be interpreted like in our method); right columns: results from our method. For both methods, we visualize the results with respect to the correct class, which is given above the image. In brackets we see how the classifier ranks this class, i.e., a (1) means it was correctly classified, whereas a (4) means that it was misclassified, and the correct class was ranked fourth. For our method, red areas show evidence for the correct class, and blue areas show evidence against the class (e.g., the scuba diver looks more like a tea pot to the classifier).
arXiv:1702.03044v2 [cs.CV] 25 Aug 2017

Published as a conference paper at ICLR 2017

# INCREMENTAL NETWORK QUANTIZATION: TOWARDS LOSSLESS CNNS WITH LOW-PRECISION WEIGHTS

Aojun Zhou*, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen
Intel Labs China
{aojun.zhou, anbang.yao, yiwen.guo, lin.x.xu, yurong.chen}@intel.com

# ABSTRACT

This paper presents incremental network quantization (INQ), a novel method targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing the zero value, and the remaining 4 bits represent at most 16 different values for the powers of two)¹, our models have improved accuracy over their 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs applicable on mobile or embedded devices. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
# 1 INTRODUCTION

Deep convolutional neural networks (CNNs) have demonstrated record breaking results on a variety of computer vision tasks such as image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015), face recognition (Taigman et al., 2014; Sun et al., 2014), semantic segmentation (Long et al., 2015; Chen et al., 2015a) and object detection (Girshick, 2015; Ren et al., 2015). Regardless of the availability of significantly improved training resources such as abundant annotated data, powerful computational platforms and diverse training frameworks, the promising results of deep CNNs are mainly attributed to the large number of learnable parameters, ranging from tens of millions to even hundreds of millions. Recent progress further shows clear evidence that CNNs could easily enjoy the accuracy gain from increased network depth and width (He et al., 2016; Szegedy et al., 2015; 2016). However, this in turn lays heavy burdens on the memory and other computational resources. For instance, ResNet-152, a specific instance of the latest residual network architecture winning the ImageNet classification challenge in 2015, has a model size of about 230 MB and needs to perform about 11.3 billion FLOPs to classify a 224 × 224 image crop. Therefore, it is very challenging to deploy deep CNNs on devices with limited computation and power budgets.

*This work was done when Aojun Zhou was an intern at Intel Labs China, supervised by Anbang Yao who proposed the original idea and is responsible for correspondence. The first three authors contributed equally to the writing of the paper.

¹This notation applies to our method throughout the paper.

Substantial efforts have been made towards the speed-up and compression of CNNs during training, feed-forward test, or both of them. Among existing methods, the category of network quantization methods attracts great attention from researchers and developers. Some network quantization works try to compress pre-trained full-precision CNN models directly. Gong et al. (2014) address the storage problem of AlexNet (Krizhevsky et al., 2012) with vector quantization techniques. By replacing the weights in each of the three fully connected layers with respective floating-point centroid values obtained from the clustering, they can get over 20× model compression at about 1% loss in top-5 recognition rate. HashedNet (Chen et al., 2015b) uses a hash function to randomly map pre-trained weights into hash buckets, and all the weights in the same hash bucket are constrained to share a single floating-point value.
In HashedNet, only the fully connected layers of several shallow CNN models are considered. For better compression, Han et al. (2016) present the deep compression method, which combines pruning (Han et al., 2015), vector quantization and Huffman coding, and reduce the model storage by 35× on AlexNet and 49× on VGG-16 (Simonyan & Zisserman, 2015). Vanhoucke et al. (2011) use an SSE 8-bit fixed-point implementation to improve the computation of neural networks on modern Intel x86 CPUs in feed-forward test, yielding 3× speed-up over an optimized floating-point baseline. Training CNNs by substituting the 32-bit floating-point representation with a 16-bit fixed-point representation has also been explored in Gupta et al. (2015).

Other seminal works attempt to restrict CNNs to low-precision versions during the training phase. Soudry et al. (2014) propose expectation backpropagation (EBP) to estimate the posterior distribution of deterministic network weights. With EBP, the network weights can be constrained to +1 and -1 during feed-forward test in a probabilistic way. BinaryConnect (Courbariaux et al., 2015) further extends the idea behind EBP to binarize network weights during the training phase directly. It has two versions of network weights: floating-point and binary. The floating-point version is used as the reference for weight binarization. BinaryConnect achieves state-of-the-art accuracy using shallow CNNs for small datasets such as MNIST (LeCun et al., 1998) and CIFAR-10. Later on, a series of efforts have been invested to train CNNs with low-precision weights, low-precision activations and even low-precision gradients, including but not limited to BinaryNet (Courbariaux et al., 2016), XNOR-Net (Rastegari et al., 2016), the ternary weight network (TWN) (Li & Liu, 2016), DoReFa-Net (Zhou et al., 2016) and the quantized neural network (QNN) (Hubara et al., 2016).

Despite these tremendous advances, CNN quantization still remains an open problem due to two critical issues which have not been well resolved yet, especially under scenarios of using low-precision weights for quantization. The first issue is the non-negligible accuracy loss of CNN quantization methods, and the other issue is the increased number of training iterations for ensuring convergence. In this paper, we attempt to address these two issues by presenting a novel incremental network quantization (INQ) method. In our INQ, there is no assumption on the CNN architecture, and its basic goal is to efficiently convert any pre-trained full-precision (i.e., 32-bit floating-point) CNN model into a low-precision version whose weights are constrained to be either powers of two or zero. The advantage of such low-precision models is that the original floating-point multiplication operations can be replaced by cheaper binary bit-shift operations on dedicated hardware like FPGAs, as illustrated by the sketch below. We noticed that most existing network quantization methods adopt a global strategy in which all the weights are simultaneously converted to low-precision ones (that are usually still floating-point types). That is, they have not considered the different importance of network weights, which leaves limited room to retain network accuracy. In sharp contrast to existing methods, our INQ handles the model accuracy drop from network quantization very carefully.
To be more specific, it incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition uses a pruning-inspired measure (Han et al., 2015; Guo et al., 2016) to divide the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in our INQ. The weights in the first group are quantized to be either powers of two or zero by a variable-length encoding method, forming a low-precision base for the original model. The weights in the other group are re-trained while keeping the quantized weights fixed, compensating for the accuracy loss resulting from the quantization. Furthermore, these three operations are repeated on the latest re-trained weight group in an iterative manner until all the weights are quantized, acting as an incremental network quantization and accuracy enhancement procedure (as illustrated in Figure 1).

Figure 1: An overview of our incremental network quantization method. (a) Pre-trained full-precision model used as a reference. (b) Model update with three proposed operations: weight partition, group-wise quantization (green connections) and re-training (blue connections). (c) Final low-precision model with all the weights constrained to be either powers of two or zero. In the figure, operation (1) represents a single run of (b), and operation (2) denotes the procedure of repeating operation (1) on the latest re-trained weight group until all the non-zero weights are quantized (the accumulated portion of quantized weights goes 50% → 75% → ... → 100%). Our method does not lead to accuracy loss when using 5-bit, 4-bit and even 3-bit approximations in network quantization. For better visualization, here we just use a 3-layer fully connected network as an illustrative example, and the newly re-trained weights are divided into two disjoint groups of the same size at each run of operation (1) except the last run, which only performs quantization on the re-trained floating-point weights occupying 12.5% of the model weights.
The main insight of our INQ is that a compact combination of the proposed weight partition, group-wise quantization and re-training operations has the potential to get a lossless low-precision CNN model from any full-precision reference. We conduct extensive experiments on the ImageNet large scale classification task using almost all known deep CNN architectures to validate the effectiveness of our method. We show that: (1) For AlexNet, VGG-16, GoogleNet and ResNets with 5-bit quantization, INQ achieves improved accuracy in comparison with their respective full-precision baselines. The absolute top-1 accuracy gain ranges from 0.13% to 2.28%, and the absolute top-5 accuracy gain is in the range of 0.23% to 1.65%. (2) INQ has the property of easy convergence in training. In general, re-training with less than 8 epochs could consistently generate a lossless model with 5-bit weights in the experiments. (3) Taking ResNet-18 as an example, our quantized models with 4-bit, 3-bit and 2-bit ternary weights also have improved or very similar accuracy compared with its 32-bit floating-point baseline. (4) Taking AlexNet as an example, the combination of our network pruning and INQ outperforms the deep compression method (Han et al., 2016) with significant margins.

# 2 INCREMENTAL NETWORK QUANTIZATION

In this section, we clarify the insight of our INQ, describe its key components, and detail its implementation.

2.1 WEIGHT QUANTIZATION WITH VARIABLE-LENGTH ENCODING

Suppose a pre-trained full-precision (i.e., 32-bit floating-point) CNN model can be represented by {W_l : 1 ≤ l ≤ L}, where W_l denotes the weight set of the l-th layer, and L denotes the number of learnable layers in the model. To simplify the explanation, we only consider convolutional layers and fully connected layers. For CNN models like AlexNet, VGG-16, GoogleNet and ResNets as tested in this paper, W_l can be a 4D tensor for a convolutional layer, or a 2D matrix for a fully connected layer. For simplicity, here the dimension difference is not considered in the expression. Given a pre-trained full-precision CNN model, the main goal of our INQ is to convert all 32-bit floating-point weights to be either powers of two or zero without loss of model accuracy. Besides, we also attempt to explore the limit of the expected bit-width under the premise of guaranteeing lossless network quantization.
Here, we start with our basic network quantization method on how to convert W_l into a low-precision version Ŵ_l, where each of its entries is chosen from

P_l = {±2^{n_1}, ..., ±2^{n_2}, 0},   (1)

where n_1 and n_2 are two integer numbers satisfying n_2 ≤ n_1. Mathematically, n_1 and n_2 help to bound P_l in the sense that its non-zero elements are constrained to be in the range of either [-2^{n_1}, -2^{n_2}] or [2^{n_2}, 2^{n_1}]. That is, network weights with absolute values smaller than 2^{n_2} will be pruned away (i.e., set to zero) in the final low-precision model. Obviously, the problem is how to determine n_1 and n_2. In our INQ, the expected bit-width b for storing the indices in P_l is set beforehand, thus the only hyper-parameter that remains to be determined is n_1, because n_2 can be naturally computed once b and n_1 are available. Here, n_1 is calculated by a tricky yet practically effective formula,

n_1 = floor(log2(4s/3)),   (2)

where floor(·) indicates the round-down operation and s is calculated by

s = max(abs(W_l)),   (3)

where abs(·) is an element-wise operation and max(·) outputs the largest element of its input. In fact, Equation (2) helps to match the rounding power of 2 for s, and it can be easily implemented in practical programming. After n_1 is obtained, n_2 can be naturally determined as n_2 = n_1 + 1 - 2^{(b-1)}/2. For instance, if b = 3 and n_1 = -1, it is easy to get n_2 = -2. Once P_l is determined, we further use the ladder of powers to convert every entry of W_l into a low-precision one by

Ŵ_l(i, j) = β sgn(W_l(i, j))   if (α + β)/2 ≤ abs(W_l(i, j)) < 3β/2,
Ŵ_l(i, j) = 0                  otherwise,   (4)

where α and β are two adjacent elements in the sorted P_l, making the above equation a numerical rounding to the quantum values. It should be emphasized that the factor 4/3 in Equation (2) is set to make sure that all the elements in P_l correspond with the quantization rule defined in Equation (4). In other words, the factor 4/3 in Equation (2) highly correlates with the factor 3/2 in Equation (4). Here, an important thing we want to clarify is the definition of the expected bit-width b. Taking 5-bit quantization as an example, since the zero value cannot be written as a power of two, we use 1 bit to represent the zero value, and the remaining 4 bits to represent at most 16 different values for the powers of two. That is, the number of candidate quantum values is at most 2^{b-1} + 1, so our quantization method actually adopts a variable-length encoding scheme.

It is clear that the quantization described above is performed on a linear scale. An alternative solution is to perform the quantization on a log scale. Although it may also be effective, it should be a little bit more difficult to implement and may cause some extra computational overhead in comparison to our method.
2.2 INCREMENTAL QUANTIZATION STRATEGY

We can naturally use the above described method to quantize any pre-trained full-precision CNN model. However, noticeable accuracy loss appeared in the experiments when using small bit-width values (e.g., 5-bit, 4-bit, 3-bit and 2-bit). In the literature, there are many existing network quantization works such as HashedNet (Chen et al., 2015b), vector quantization (Gong et al., 2014), fixed-point representation (Vanhoucke et al., 2011; Gupta et al., 2015), BinaryConnect (Courbariaux et al., 2015), BinaryNet (Courbariaux et al., 2016), XNOR-Net (Rastegari et al., 2016), TWN (Li & Liu, 2016), DoReFa-Net (Zhou et al., 2016) and QNN (Hubara et al., 2016). Similar to our basic network quantization method, they also suffer from non-negligible accuracy loss on deep CNNs, especially when applied to the ImageNet large scale classification dataset. For all these methods, a common fact is that they adopt a global strategy in which all the weights are simultaneously converted into low-precision ones, which in turn causes accuracy loss. Compared with the methods focusing on pre-trained models, accuracy loss becomes worse for methods such as XNOR-Net, TWN, DoReFa-Net and QNN, which intend to train low-precision CNNs from scratch.

Recall that our main goal is to achieve lossless low-precision quantization for any pre-trained full-precision CNN model with no assumption on its architecture. To this end, our INQ makes a special handling of the strategy for suppressing the resulting quantization loss in model accuracy. We are partially inspired by the latest progress in network pruning (Han et al., 2015; Guo et al., 2016). In these methods, the accuracy loss from removing less important network weights of a pre-trained neural network model could be well compensated by subsequent re-training steps. Therefore, we conjecture that the nature of changing network weight importance is critical to achieving lossless network quantization. Based on this assumption, we present INQ, which incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition divides the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in our INQ. The weights in the first group are responsible for forming a low-precision base for the original model, thus they are quantized by using Equation (4). The weights in the second group adapt to compensate for the loss in model accuracy, thus they are the ones to be re-trained.
Once the first run of the quantization and re-training operations is finished, all three operations are further conducted on the second weight group in an iterative manner, until all the weights are converted to be either powers of two or zero, acting as an incremental network quantization and accuracy enhancement procedure. As a result, accuracy loss under low-precision CNN quantization can be well suppressed by our INQ. Illustrative results at iterative steps of our INQ are provided in Figure 2.

Figure 2: Result illustrations. First row: results from the 1st iteration of the proposed three operations. The top left cube illustrates the weight partition operation generating two disjoint groups, the middle image illustrates the quantization operation on the first weight group (green cells, holding values such as 2^0, -2^-1 or 2^-2), and the top right cube illustrates the re-training operation on the second weight group (light blue cells, still floating-point). Second row: results from the 2nd, 3rd and 4th iterations of the INQ. In the figure, the accumulated portion of the weights which have been quantized goes from 50% → 75% → 87.5% → 100%.

For the l-th layer, weight partition can be defined as

A_l^(1) ∪ A_l^(2) = {W_l(i, j)},  and  A_l^(1) ∩ A_l^(2) = ∅,   (5)

where A_l^(1) denotes the first weight group that needs to be quantized, and A_l^(2) denotes the other weight group that needs to be re-trained. We leave the strategies for group partition to be chosen in the experiment section. Here, we define a binary matrix T_l to help distinguish the above two categories of weights. That is, T_l(i, j) = 0 means W_l(i, j) ∈ A_l^(1), and T_l(i, j) = 1 means W_l(i, j) ∈ A_l^(2).
2.3 INCREMENTAL NETWORK QUANTIZATION ALGORITHM

Now we come to the training method. Taking the l-th layer as an example, the basic optimization problem of making its weights either powers of two or zero can be expressed as

min_{W_l} E(W_l) = L(W_l) + λ R(W_l)
s.t. W_l(i, j) ∈ P_l, 1 ≤ l ≤ L,   (6)

where L(W_l) is the network loss, R(W_l) is the regularization term, λ is a positive coefficient, and the constraint indicates that each weight entry W_l(i, j) should be chosen from the set P_l, consisting of a fixed number of values that are powers of two, plus zero. Directly solving the above optimization problem by training from scratch is challenging, since it easily runs into convergence problems. By performing the weight partition and group-wise quantization operations beforehand, the optimization problem defined in (6) can be reshaped into an easier version. That is, we only need to optimize the following objective function:

min_{W_l} E(W_l) = L(W_l) + λ R(W_l)
s.t. W_l(i, j) ∈ P_l if T_l(i, j) = 0, 1 ≤ l ≤ L,   (7)

where P_l is determined by the group-wise quantization operation, and the binary matrix T_l acts as a mask which is determined by the weight partition operation. Since P_l and T_l are known, the optimization problem (7) can be solved using the popular stochastic gradient descent (SGD) method. That is, in INQ, the update scheme for the re-training is

W_l(i, j) ← W_l(i, j) − γ (∂E/∂W_l(i, j)) T_l(i, j),   (8)

where γ is a positive learning rate. Note that the binary matrix T_l forces zero update to the weights that have been quantized. That is, only the weights still kept as floating-point values are updated, akin to the latest pruning methods (Han et al., 2015; Guo et al., 2016) in which only the weights that are not currently removed are re-trained to enhance network accuracy. The whole procedure of our INQ is summarized as Algorithm 1.
We would like to highlight that the merits of our INQ lie in three aspects: (1) Weight partition introduces importance-aware weight quantization. (2) Group-wise weight quantization introduces much less accuracy loss than simultaneously quantizing all the network weights, thus leaving re-training more room to recover model accuracy. (3) By integrating the operations of weight partition, group-wise quantization and re-training into a nested loop, our INQ has the potential to obtain a lossless low-precision CNN model from the pre-trained full-precision reference.

# Algorithm 1 Incremental network quantization for lossless CNNs with low-precision weights.

Input: X: the training data; {W_l : 1 ≤ l ≤ L}: the pre-trained full-precision CNN model; {σ_1, σ_2, ..., σ_N}: the accumulated portions of weights quantized at iterative steps.
Output: {Ŵ_l : 1 ≤ l ≤ L}: the final low-precision model with the weights constrained to be either powers of two or zero.

1: Initialize A_l^(1) ← ∅, A_l^(2) ← {W_l(i, j)}, T_l ← 1, for 1 ≤ l ≤ L
2: for n = 1, 2, ..., N do
3:   Reset the base learning rate and the learning policy
4:   According to σ_n, perform layer-wise weight partition and update A_l^(1), A_l^(2) and T_l
5:   Based on A_l^(1), determine P_l layer-wisely
6:   Quantize the weights in A_l^(1) by Equation (4) layer-wisely
7:   Calculate the feed-forward loss, and update the weights in {A_l^(2) : 1 ≤ l ≤ L} by Equation (8)
8: end for
# 3 EXPERIMENTAL RESULTS

To analyze the performance of our INQ, we perform extensive experiments on the ImageNet large scale classification task, which is known as the most challenging image classification benchmark so far. The ImageNet dataset has about 1.2 million training images and 50 thousand validation images. Each image is annotated as one of 1000 object classes. We apply our INQ to AlexNet, VGG-16, GoogleNet, ResNet-18 and ResNet-50, covering almost all known deep CNN architectures. Using the center crops of validation images, we report the results with two standard measures: top-1 error rate and top-5 error rate. For fair comparison, all pre-trained full-precision (i.e., 32-bit floating-point) CNN models except ResNet-18 are taken from the Caffe model zoo². Note that He et al. (2016) do not release their pre-trained ResNet-18 model to the public, so we use a publicly available re-implementation by Facebook³. Since our method is implemented with Caffe, we make use of an open source tool⁴ to convert the pre-trained ResNet-18 model from Torch to Caffe.

3.1 RESULTS ON IMAGENET

Table 1: Our INQ well converts diverse full-precision deep CNN models (including AlexNet, VGG-16, GoogleNet, ResNet-18 and ResNet-50) to 5-bit low-precision versions with consistently improved model accuracy.

| Network | Bit-width | Top-1 error | Top-5 error | Decrease in top-1/top-5 error |
|---|---|---|---|---|
| AlexNet ref | 32 | 42.76% | 19.77% | |
| AlexNet | 5 | 42.61% | 19.54% | 0.15%/0.23% |
| VGG-16 ref | 32 | 31.46% | 11.35% | |
| VGG-16 | 5 | 29.18% | 9.70% | 2.28%/1.65% |
| GoogleNet ref | 32 | 31.11% | 10.97% | |
| GoogleNet | 5 | 30.98% | 10.72% | 0.13%/0.25% |
| ResNet-18 ref | 32 | 31.73% | 11.31% | |
| ResNet-18 | 5 | 31.02% | 10.90% | 0.71%/0.41% |
| ResNet-50 ref | 32 | 26.78% | 8.76% | |
| ResNet-50 | 5 | 25.19% | 7.55% | 1.59%/1.21% |

Setting the expected bit-width to 5, the first set of experiments is performed to testify the efficacy of our INQ on different CNN architectures. Regarding weight partition, there are several candidate strategies, as we tried in our previous work on efficient network pruning (Guo et al., 2016). In Guo et al. (2016), we found random partition and pruning-inspired partition to be the two best choices compared with the others. Thus in this paper, we directly compare these two strategies for weight partition. In the random strategy, the weights in each layer of any pre-trained full-precision deep CNN model are randomly split into two disjoint groups. In the pruning-inspired strategy, the weights are divided into two disjoint groups by comparing their absolute values with layer-wise thresholds which are automatically determined by a given splitting ratio. Here we directly use the pruning-inspired strategy, and the experimental results in Section 3.2 will show why. After re-training with no more than 8 epochs over each pre-trained full-precision model, we obtain the results shown in Table 1. It can be concluded that the 5-bit CNN models generated by our INQ show consistently improved top-1 and top-5 recognition rates compared with their respective full-precision references.
Parameter settings are described below.

AlexNet: AlexNet has 5 convolutional layers and 3 fully connected layers. We set the accumulated portions of quantized weights at iterative steps as {0.3, 0.6, 0.8, 1}, the batch size as 256, the weight decay as 0.0005, and the momentum as 0.9.

VGG-16: Compared with AlexNet, VGG-16 has 13 convolutional layers and more parameters. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 32, the weight decay as 0.0005, and the momentum as 0.9.

²https://github.com/BVLC/caffe/wiki/Model-Zoo
³https://github.com/facebook/fb.resnet.torch/tree/master/pretrained
⁴https://github.com/zhanghang1989/fb-caffe-exts

GoogleNet: Compared with AlexNet and VGG-16, GoogleNet is more difficult to quantize due to a smaller number of parameters and the increased network width. We set the accumulated portions of quantized weights at iterative steps as {0.2, 0.4, 0.6, 0.8, 1}, the batch size as 80, the weight decay as 0.0002, and the momentum as 0.9.

ResNet-18: Different from the above three networks, ResNets have batch normalization layers and relieve the vanishing gradient problem by using shortcut connections. We first test the 18-layer version for exploratory purposes and test the 50-layer version later on. The network architectures of ResNet-18 and ResNet-34 are very similar; the only difference is the number of filters in every convolutional layer. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 80, the weight decay as 0.0005, and the momentum as 0.9.

ResNet-50: Besides the significantly increased network depth, ResNet-50 has a more complex network architecture in comparison to ResNet-18. However, regarding network architecture, ResNet-50 is very similar to ResNet-101 and ResNet-152; the only difference is the number of filters in every convolutional layer. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 32, the weight decay as 0.0005, and the momentum as 0.9. These per-network schedules are summarized in code form below.
3.2 ANALYSIS OF WEIGHT PARTITION STRATEGIES

In our INQ, the first operation is weight partition, whose result will directly affect the following group-wise quantization and re-training operations. Therefore, the second set of experiments is conducted to analyze two candidate strategies for weight partition. As mentioned in the previous section, we use the pruning-inspired strategy for weight partition. Unlike the random strategy, in which all the weights have equal probability of falling into either of the two disjoint groups, the pruning-inspired strategy considers that the weights with larger absolute values are more important than the smaller ones for forming a low-precision base for the original CNN model. We use ResNet-18 as a test case to compare the performance of these two strategies. In the experiments, the parameter settings are completely the same as described in Section 3.1. We set 4 epochs for weight re-training. Table 2 summarizes the results of our INQ with 5-bit quantization. It can be seen that our INQ achieves a top-1 error rate of 32.11% and a top-5 error rate of 11.73% using random partition. Comparatively, pruning-inspired partition brings 1.09% and 0.83% decreases in top-1 and top-5 error rates, respectively. Apparently, pruning-inspired partition is better than random partition, and this is the reason why we use it in this paper. For future work, weight partition based on quantization error could also be an option worth exploring.

Table 2: Comparison of two different strategies for weight partition on ResNet-18.

| Strategy | Bit-width | Top-1 error | Top-5 error |
|---|---|---|---|
| Random partition | 5 | 32.11% | 11.73% |
| Pruning-inspired partition | 5 | 31.02% | 10.90% |

3.3 THE TRADE-OFF BETWEEN EXPECTED BIT-WIDTH AND MODEL ACCURACY

The third set of experiments is performed to explore the limit of the expected bit-width under which our INQ can still achieve lossless network quantization. Similar to the second set of experiments, we also use ResNet-18 as a test case, and the parameter settings for the batch size, the weight decay and the momentum are completely the same. Finally, lower-precision models with 4-bit, 3-bit and even 2-bit ternary weights are generated for comparison. As the expected bit-width goes down, the number of candidate quantum values decreases significantly, thus we shall increase the number of iterative steps accordingly to enhance the accuracy of the final low-precision model. Specifically, we set the accumulated portions of quantized weights at iterative steps as {0.3, 0.5, 0.8, 0.9, 0.95, 1}, {0.2, 0.4, 0.6, 0.7, 0.8, 0.9, 0.95, 1} and {0.2, 0.4, 0.6, 0.7, 0.8, 0.85, 0.9, 0.95, 0.975, 1} for the 4-bit, 3-bit and 2-bit ternary models, respectively. The required number of epochs also increases when the expected bit-width goes down, and it reaches 30 when training our 2-bit ternary model. Although our 4-bit model shows slightly decreased accuracy when compared with the 5-bit model, its accuracy is still better than that of the pre-trained full-precision model. Comparatively, even when the expected bit-width goes down to 3, our low-precision model shows only 0.19% and 0.33% losses in top-1 and top-5 recognition rates, respectively. As for our 2-bit ternary model, although it incurs a 2.25% increase in top-1 error rate and a 1.56% increase in top-5 error rate in comparison to the pre-trained full-precision reference, its accuracy is considerably better than the state-of-the-art results reported for the binary-weight network (BWN) (Rastegari et al., 2016) and the ternary weight network (TWN) (Li & Liu, 2016). Detailed results are summarized in Table 3 and Table 4.
Table 3: Our INQ generates extremely low-precision (4-bit and 3-bit) models with improved or very similar accuracy compared with the full-precision ResNet-18 model.

| Model | Bit-width | Top-1 error | Top-5 error |
|---|---|---|---|
| ResNet-18 ref | 32 | 31.73% | 11.31% |
| INQ | 5 | 31.02% | 10.90% |
| INQ | 4 | 31.11% | 10.99% |
| INQ | 3 | 31.92% | 11.64% |
| INQ | 2 (ternary) | 33.98% | 12.87% |

Table 4: Comparison of our 2-bit ternary model and some other binary or ternary models, including the BWN and the TWN approximations of ResNet-18.

| Method | Bit-width | Top-1 error | Top-5 error |
|---|---|---|---|
| BWN (Rastegari et al., 2016) | 1 | 39.20% | 17.00% |
| TWN (Li & Liu, 2016) | 2 (ternary) | 38.20% | 15.80% |
| INQ (ours) | 2 (ternary) | 33.98% | 12.87% |

3.4 LOW-BIT DEEP COMPRESSION

In the literature, the recently proposed deep compression method (Han et al., 2016) reports the best results so far on network compression without loss of model accuracy. Therefore, the last set of experiments is conducted to explore the potential of our INQ for much better deep compression. Note that Han et al. (2016) is a hybrid network compression solution combining three different techniques, namely network pruning (Han et al., 2015), vector quantization (Gong et al., 2014) and Huffman coding. Taking AlexNet as an example, network pruning gets 9× compression; however, this result is mainly obtained from the fully connected layers. Actually, its compression performance on the convolutional layers is less than 3× (as can be seen in Table 4 of Han et al. (2016)). Besides, network pruning is realized by separately performing pruning and re-training in an iterative way, which is very time-consuming: it costs at least several weeks to compress AlexNet. We solved this problem with our dynamic network surgery (DNS) method (Guo et al., 2016), which achieves about 7× speed-up in training and improves the performance of network pruning from 9× to 17.7×. In Han et al. (2016), after network pruning, vector quantization further improves the compression ratio from 9× to 27×, and Huffman coding finally boosts the compression ratio up to 35×. For fair comparison, we combine our proposed INQ and DNS, and compare the resulting method with Han et al. (2016). Detailed results are summarized in Table 5. When combining our proposed INQ and DNS, we achieve much better compression results compared with Han et al. (2016).
1702.03044#36 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | (as can be seen in the Table 4 of Han et al. (2016)). Be- sides, network pruning is realized by separately performing pruning and re-training in an iterative way, which is very time-consuming. It will cost at least several weeks for compressing AlexNet. We solved this problem by our dynamic network surgery (DNS) method (Guo et al., 2016) which achieves about 7à speed-up in training and improves the performance of network pruning from 9à to 17.7à . In Han et al. (2016), after network pruning, vector quantization further improves com- pression ratio from 9à to 27à , and Huffman coding ï¬ nally boosts compression ratio up to 35à . For fair comparison, we combine our proposed INQ and DNS, and compare the resulting method with Han et al. (2016). Detailed results are summarized in Table 5. When combing our proposed INQ and DNS, we achieve much better compression results compared with Han et al. (2016). | 1702.03044#35 | 1702.03044#37 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#37 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Speciï¬ - cally, with 5-bit quantization, we can achieve 53à compression with slightly larger gains both in top-5 and top-1 recognition rates, yielding 51.43%/96.30% absolute improvement in compression performance compared with full version/fair version (i.e., the combination of network pruning and vector quantization) of Han et al. (2016), respectively. Consistently better results have also obtained for our 4-bit and 3-bit models. Besides, we also perform a set of experiments on AlexNet to compare the performance of our INQ and vector quantization (Gong et al., 2014). For fair comparison, re-training is also used to enhance the performance of vector quantization, and we set the number of cluster centers for all of 5 convo- lutional layers and 3 fully connect layers to 32 (i.e., 5-bit quantization). In the experiment, vector quantization incurs over 3% loss in model accuracy. When we change the number of cluster centers for convolutional layers from 32 to 128, it gets an accuracy loss of 0.98%. This is consistent with the results reported in (Gong et al., 2014). Comparatively, vector quantization is mainly proposed | 1702.03044#36 | 1702.03044#38 | 1702.03044 | [
"1605.04711"
]
|
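For readers who want to see what the vector-quantization baseline above amounts to, here is a hedged sketch: scalar k-means clusters a layer's weights into 2^b shared centers (b = 5 gives the 32 cluster centers mentioned above) and each weight is replaced by its center. The quantile initialization and iteration count are arbitrary illustration choices; note that, unlike INQ, the resulting centers are still floating-point values.

```python
import numpy as np

def kmeans_quantize(weights, bits=5, iters=20):
    """Cluster a weight tensor into 2**bits shared (floating-point) centers, k-means style."""
    w = weights.ravel()
    k = 2 ** bits
    centers = np.quantile(w, np.linspace(0.0, 1.0, k))        # simple quantile initialization
    for _ in range(iters):
        assign = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = w[assign == j].mean()             # move each center to its cluster mean
    quantized = centers[assign].reshape(weights.shape)
    return quantized, assign.reshape(weights.shape), centers
```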
1702.03044#38 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Table 5: Comparison of the combination of our INQ and DNS, and the deep compression method on AlexNet. Conv: Convolutional layer, FC: Fully connected layer, P: Pruning, Q: Quantization, H: Huffman coding. (Method, Bit-width (Conv/FC), Compression ratio, Decrease in top-1/top-5 error): Han et al. (2016) (P+Q), 8/5, 27×, 0.00%/0.03%; Han et al. (2016) (P+Q+H), 8/5, 35×, 0.00%/0.03%; Han et al. (2016) (P+Q+H), 8/4, -, -0.01%/0.00%; Our method (P+Q), 5/5, 53×, 0.08%/0.03%; Han et al. (2016) (P+Q+H), 4/2, -, -1.99%/-2.60%; Our method (P+Q), 4/4, 71×, -0.52%/-0.20%; Our method (P+Q), 3/3, 89×, -1.47%/-0.96%. | 1702.03044#37 | 1702.03044#39 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#39 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | to compress the parameters in the fully connected layers of a pre-trained full-precision CNN model, while our INQ addresses all network layers simultaneously and has no accuracy loss for 5-bit and 4-bit quantization. Therefore, it is evident that our INQ is much better than vector quantization. Last but not least, the final weights for vector quantization (Gong et al., 2014), network pruning (Han et al., 2015) and deep compression (Han et al., 2016) are still floating-point values, but the final weights for our INQ are in the form of either powers of two or zero. The direct advantage of our INQ is that the original floating-point multiplication operations can be replaced by cheaper binary bit shift operations on dedicated hardware like FPGAs. # 4 CONCLUSIONS In this paper, we present INQ, a new network quantization method, to address the problem of how to convert any pre-trained full-precision (i.e., 32-bit floating-point) CNN model into a lossless low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which usually quantize all the network weights simultaneously, INQ is a more compact quantization framework. It incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition splits the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in INQ. The weights in the first group are directly quantized by a variable-length encoding method, forming a low-precision base for the original CNN model. The weights in the other group are re-trained while keeping all the quantized weights fixed, compensating for the accuracy loss from network quantization. More importantly, the operations of weight partition, group-wise quantization and re-training are repeated on the latest re-trained weight group in an iterative manner until all the weights are quantized, acting as an incremental network quantization and accuracy enhancement procedure. On the ImageNet large scale classifi | 1702.03044#38 | 1702.03044#40 | 1702.03044 | [
"1605.04711"
]
|
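To make the bit-shift claim above concrete, here is a small hedged sketch (an illustration, not the paper's hardware design): once a weight is constrained to zero or ±2^e, multiplying an activation by it reduces to a sign flip plus a shift. An integer activation is used so that Python's shift operators map directly onto the idea.

```python
def mul_by_pow2_weight(x: int, sign: int, exponent: int) -> int:
    """Compute (sign * 2**exponent) * x with shifts only; sign is -1, 0 or +1, exponent may be negative."""
    if sign == 0:
        return 0
    shifted = x << exponent if exponent >= 0 else x >> -exponent  # negative exponent = arithmetic right shift
    return shifted if sign > 0 else -shifted

assert mul_by_pow2_weight(12, +1, -2) == 3     # 12 * 2^-2
assert mul_by_pow2_weight(5, -1, 3) == -40     # 5 * (-2^3)
```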
1702.03044#40 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | cation task, we conduct extensive experiments and show that our quantized CNN models with 5-bit, 4-bit, 3-bit and even 2-bit ternary weights have improved or at least comparable accuracy against their full-precision baselines, including AlexNet, VGG-16, GoogleNet and ResNets. As for future work, we plan to extend the incremental idea behind INQ from low-precision weights to low-precision activations and low-precision gradients (we have actually already made some good progress on it, as shown in our supplementary materials). We will also investigate computation and power efficiency by implementing our low-precision CNN models on hardware platforms. | 1702.03044#39 | 1702.03044#41 | 1702.03044 | [
"1605.04711"
]
|
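The three-operation loop described in the conclusion can be sketched compactly. The following is a simplified, hedged rendition: the power-of-two grid, the magnitude-based (pruning-inspired) partition rule, the accumulated portions, and the no-op `retrain` placeholder are illustrative assumptions rather than the exact procedure or hyper-parameters of the paper.

```python
import numpy as np

def quantize_pow2_or_zero(w, n_low=-8, n_high=-1):
    """Snap each value to the closest element of {0} union {+/- 2^n, n_low <= n <= n_high}."""
    grid = np.array([0.0] + [s * 2.0 ** n for n in range(n_low, n_high + 1) for s in (-1.0, 1.0)])
    return grid[np.abs(np.asarray(w)[..., None] - grid).argmin(axis=-1)]

def incremental_network_quantization(weights, portions=(0.5, 0.75, 0.875, 1.0),
                                     retrain=lambda w, frozen: w):
    """Weight partition -> group-wise quantization -> re-training, repeated until all weights are quantized.

    `retrain(w, frozen)` stands in for re-training the still floating-point weights
    (frozen == False) while the quantized ones stay fixed; here it is a placeholder.
    """
    w = np.array(weights, dtype=float)
    frozen = np.zeros(w.shape, dtype=bool)
    for p in portions:
        k = int(round(p * w.size))                      # total number of quantized weights after this step
        order = np.argsort(-np.abs(w), axis=None)       # partition by magnitude: largest weights first
        chosen = np.unravel_index(order[:k], w.shape)
        w[chosen] = quantize_pow2_or_zero(w[chosen])
        frozen[chosen] = True
        w = retrain(w, frozen)                          # re-train the remaining float weights
    return w
```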
1702.03044#41 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | # REFERENCES Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015a. Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In ICML, 2015b. | 1702.03044#40 | 1702.03044#42 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#42 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NIPS, 2015. Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830v3, 2016. | 1702.03044#41 | 1702.03044#43 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#43 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Ross Girshick. Fast r-cnn. In ICCV, 2015. Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115v1, 2014. Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. In NIPS, 2016. Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML, 2015. | 1702.03044#42 | 1702.03044#44 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#44 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Song Han, Jeff Pool, John Tran, and William J. Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015. Song Han, Jeff Pool, John Tran, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061v1, 2016. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. | 1702.03044#43 | 1702.03044#45 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#45 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In NIPS, 1998. Fengfu Li and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711v1, 2016. Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. | 1702.03044#44 | 1702.03044#46 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#46 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279v4, 2016. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. | 1702.03044#45 | 1702.03044#47 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#47 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In NIPS, 2014. Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation from predicting 10,000 classes. In CVPR, 2014. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261v1, 2016. | 1702.03044#46 | 1702.03044#48 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#48 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, 2014. Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011. Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. | 1702.03044#47 | 1702.03044#49 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#49 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160v1, 2016. # A APPENDIX 1: STATISTICAL ANALYSIS OF THE QUANTIZED WEIGHTS Taking our 5-bit AlexNet model as an example, we analyze the distribution of the quantized weights. Detailed statistical results are summarized in Table 6. | 1702.03044#48 | 1702.03044#50 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#50 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | We can find: (1) in the 1st and 2nd convolutional layers, the values of {-2^-6, -2^-5, -2^-4, 2^-6, 2^-5, 2^-4} and {-2^-8, -2^-7, -2^-6, -2^-5, 0, 2^-8, 2^-7, 2^-6, 2^-5} occupy over 60% and 94% of all quantized weights, respectively; (2) the distributions of the quantized weights in the 3rd, 4th and 5th convolutional layers are similar to that of the 2nd convolutional layer, and more weights are quantized into zero in the 2nd, 3rd, 4th and 5th convolutional layers compared with the 1st convolutional layer; (3) in the 1st fully connected layer, the values of {-2^-10, -2^-9, -2^-8, -2^-7, 0, 2^-10, 2^-9, 2^-8, 2^-7} occupy about 98% of all quantized weights, and similar results can be seen for the 2nd fully connected layer; (4) generally, the distributions of the quantized weights in the convolutional layers are usually more scattered compared with the fully connected layers. This may be partially the reason why it is much easier to get good compression performance on fully connected layers in comparison to convolutional layers, when using methods such as network hashing (Chen et al., 2015b) and vector quantization (Gong et al., 2014); (5) for the 5-bit AlexNet model, the required bit-width for each layer is actually 4 but not 5. | 1702.03044#49 | 1702.03044#51 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#51 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Table 6: A statistical distribution of the quantized weights in our 5-bit AlexNet model. [Per-layer percentages of weights at each quantized value (zero and the admissible powers of two); each layer column sums to 100%, and the required bit-width per layer is 4.] # B APPENDIX 2: LOSSLESS CNNS WITH LOW-PRECISION WEIGHTS AND LOW-PRECISION ACTIVATIONS | 1702.03044#50 | 1702.03044#52 | 1702.03044 | [
"1605.04711"
]
|
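Statistics like those in Table 6 are straightforward to recompute once a layer's weights are constrained to zero or ± powers of two; the following hedged sketch simply counts how often each quantized value occurs (a generic utility, not the authors' analysis script).

```python
import numpy as np
from collections import Counter

def quantized_value_distribution(layer_weights):
    """Percentage of weights at each value, assuming every weight is 0 or +/- a power of two."""
    labels = []
    for w in np.asarray(layer_weights).ravel():
        if w == 0.0:
            labels.append("0")
        else:
            labels.append(f"{'-' if w < 0 else ''}2^{int(round(np.log2(abs(w))))}")
    counts = Counter(labels)
    return {value: 100.0 * n / len(labels) for value, n in counts.items()}

print(quantized_value_distribution([0.0, 0.25, -0.25, 0.125, 0.0, -0.5]))
```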
1702.03044#52 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Table 7: Comparison of our VGG-16 model with 5-bit weights and 4-bit activations, and the pre-trained reference with 32-bit floating-point weights and 32-bit floating-point activations. (Network, Bit-width for weight/activation, Top-1 error, Top-5 error, Decrease in top-1/top-5 error): VGG-16 ref, 32/32, 31.46%, 11.35%, -; VGG-16, 5/4, 29.82%, 10.19%, 1.64%/1.16%. | 1702.03044#51 | 1702.03044#53 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#53 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | Recently, we have made some good progress on developing our INQ for lossless CNNs with both low-precision weights and low-precision activations. According to the results summarized in Table 7, it can be seen that our VGG-16 model with 5-bit weights and 4-bit activations shows improved top-5 and top-1 recognition rates in comparison to the pre-trained reference with 32-bit floating-point weights and 32-bit floating-point activations. To the best of our knowledge, these should be the best results reported on the VGG-16 architecture so far. | 1702.03044#52 | 1702.03044#54 | 1702.03044 | [
"1605.04711"
]
|
1702.03044#54 | Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | 14 | 1702.03044#53 | 1702.03044 | [
"1605.04711"
]
|
|
1702.01806#0 | Beam Search Strategies for Neural Machine Translation | arXiv:1702.01806v2 [cs.CL] 14 Jun 2017 # Beam Search Strategies for Neural Machine Translation Markus Freitag and Yaser Al-Onaizan IBM T.J. Watson Research Center 1101 Kitchawan Rd, Yorktown Heights, NY 10598 {freitagm,onaizan}@us.ibm.com # Abstract | 1702.01806#1 | 1702.01806 | [
"1605.03209"
]
|
|
1702.01806#1 | Beam Search Strategies for Neural Machine Translation | The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT then uses a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to-right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the drawback of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German→English and Chinese→English without losing any translation quality. | 1702.01806#0 | 1702.01806#2 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#2 | Beam Search Strategies for Neural Machine Translation | models (Jean et al., 2015; Luong et al., 2015), it has become very popular in recent years (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014). With the recent success of NMT, attention has shifted towards making it more practical. One of the challenges is the search strategy for extracting the best translation for a given source sentence. In NMT, new sentences are translated by a simple beam search decoder that finds a translation that approximately maximizes the conditional probability of a trained NMT model. The beam search strategy generates the translation word by word from left-to-right while keeping a fixed number (beam) of active candidates at each time step. By increasing the beam size, the translation performance can increase at the expense of significantly reducing the decoder speed. Typically, there is a saturation point at which the translation quality does not improve any more by further increasing the beam. | 1702.01806#1 | 1702.01806#3 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#3 | Beam Search Strategies for Neural Machine Translation | The motivation of this work is twofold. First, we prune the search graph, thus speeding up the decoding process without losing any translation quality. Secondly, we observed that the best scoring candidates often share the same history and often come from the same partial hypothesis. We limit the amount of candidates coming from the same partial hypothesis to introduce more diversity without reducing the decoding speed by just using a higher beam. # 1 Introduction Due to the fact that Neural Machine Translation (NMT) is reaching comparable or even better performance compared to the traditional statistical machine translation (SMT) # 2 Related Work The original beam search for sequence to sequence models has been introduced and described by (Graves, 2012; Boulanger-Lewandowski et al., 2013) and by (Sutskever et al., 2014) for neural machine translation. (Hu et al., 2015; Mi et al., 2016) improved the beam search with a constrained softmax function which only considered a limited word set of translation candidates to reduce the computation complexity. This has the advantage that they normalize only a small set of candidates and thus improve the decoding speed. (Wu et al., 2016) only consider tokens that have local scores that are not more than beamsize below the best token during their search. Further, the authors prune all partial hypotheses whose score is beamsize lower than the best final hypothesis (if one has already been generated). In this work, we investigate different absolute and relative pruning schemes which have successfully been applied in statistical machine translation, e.g. for phrase table pruning (Zens et al., 2012). # 3 Original Beam Search The original beam-search strategy finds a translation that approximately maximizes the conditional probability given by a specific model. It builds the translation from left-to-right and keeps a fixed number (beam) of translation candidates with the highest log-probability at each time step. For each end-of-sequence symbol that is selected among the highest scoring candidates the beam is reduced by one and the translation is stored into a final candidate list. When the beam is zero, it stops the search and picks the translation with the highest log-probability (normalized by the number of target words) out of the fi | 1702.01806#2 | 1702.01806#4 | 1702.01806 | [
"1605.03209"
]
|
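The original beam-search procedure described in Section 3 can be sketched as follows. This is a hedged, minimal rendition: `log_prob_next(prefix)` is a stand-in for the trained NMT model's next-word log-probabilities, and the EOS handling follows the description above (each finished hypothesis shrinks the beam by one; the final pick is length-normalized).

```python
def beam_search(log_prob_next, beam_size, eos, max_len=100):
    """Left-to-right beam search; log_prob_next(prefix) -> {token: log_prob}."""
    beams = [((), 0.0)]                       # active partial hypotheses: (prefix, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        if beam_size <= 0 or not beams:
            break
        expansions = []
        for prefix, score in beams:
            for token, lp in log_prob_next(prefix).items():
                expansions.append((prefix + (token,), score + lp))
        expansions.sort(key=lambda e: e[1], reverse=True)
        beams = []
        for prefix, score in expansions:
            if len(beams) >= beam_size:
                break
            if prefix[-1] == eos:             # a selected end-of-sequence symbol reduces the beam by one
                finished.append((prefix, score))
                beam_size -= 1
            else:
                beams.append((prefix, score))
    # return the finished translation with the highest length-normalized log-probability
    return max(finished, key=lambda e: e[1] / len(e[0]))[0] if finished else ()
```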
1702.01806#4 | Beam Search Strategies for Neural Machine Translation | nal candidate list. # 4 Search Strategies In this section, we describe the different strategies we experimented with. In all our extensions, we first reduce the candidate list to the current beam size and apply on top of this one or several of the following pruning schemes. Relative Threshold Pruning. The relative threshold pruning method discards those candidates that are far worse than the best active candidate. Given a pruning threshold rp and an active candidate list C, a candidate cand ∈ C is discarded if: score(cand) ≤ rp * max_{c ∈ C} {score(c)} (1) Absolute Threshold Pruning. Instead of taking the relative difference of the scores into account, we just discard those candidates that are worse by a specific threshold than the best active candidate. Given a pruning threshold ap and an active candidate list C, a candidate cand ∈ C is discarded if: score(cand) ≤ max_{c ∈ C} {score(c)} - ap (2) Relative Local Threshold Pruning. In this pruning approach, we only consider the score score_w of the last generated word and not the total score which also includes the scores of the previously generated words. Given a pruning threshold rpl and an active candidate list C, a candidate cand ∈ C is discarded if: score_w(cand) ≤ rpl * max_{c ∈ C} {score_w(c)} (3) Maximum Candidates per Node. We observed that at each time step during the decoding process, most of the partial hypotheses share the same predecessor words. To introduce more diversity, we allow only a fixed number of candidates with the same history at each time step. Given a maximum candidate threshold mc and an active candidate list C, a candidate cand ∈ C is discarded if already mc better scoring partial hypotheses with the same history are in the candidate list. | 1702.01806#3 | 1702.01806#5 | 1702.01806 | [
"1605.03209"
]
|
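The four pruning rules above translate directly into a per-time-step filter over the candidate list. The sketch below is a hedged illustration (not the authors' decoder): each candidate carries its total score, the score of its last word, and a reference to its parent partial hypothesis, and the inequalities of Equations (1)-(3) are applied exactly as written (the relative rules read most naturally when scores are in the probability domain, so that the best candidate itself is never discarded).

```python
def prune_candidates(candidates, rp=None, ap=None, rpl=None, mc=None):
    """Filter one time step's candidates with the four pruning schemes; each candidate is a dict
    with keys 'score' (total), 'word_score' (last word only) and 'history' (parent hypothesis id)."""
    best = max(c['score'] for c in candidates)
    best_word = max(c['word_score'] for c in candidates)
    kept, per_history = [], {}
    for c in sorted(candidates, key=lambda c: c['score'], reverse=True):
        if rp is not None and c['score'] <= rp * best:                 # relative threshold pruning, Eq. (1)
            continue
        if ap is not None and c['score'] <= best - ap:                 # absolute threshold pruning, Eq. (2)
            continue
        if rpl is not None and c['word_score'] <= rpl * best_word:     # relative local threshold pruning, Eq. (3)
            continue
        seen = per_history.get(c['history'], 0)
        if mc is not None and seen >= mc:                              # maximum candidates per node
            continue
        per_history[c['history']] = seen + 1
        kept.append(c)
    return kept
```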
1702.01806#5 | Beam Search Strategies for Neural Machine Translation | # 5 Experiments For the German→English translation task, we train an NMT system based on the WMT 2016 training data (Bojar et al., 2016) (3.9M parallel sentences). For the Chinese→English experiments, we use an NMT system trained on 11 million sentences from the BOLT project. In all our experiments, we use our in-house attention-based NMT implementation, which is similar to (Bahdanau et al., 2014). For German→English, we use sub-word units extracted by byte pair encoding (Sennrich et al., 2015) instead of words, which shrinks the vocabulary to 40k sub-word symbols for both source and target. For Chinese→English, we limit our vocabularies to be the top 300K most frequent words for both source and target language. Words not in these vocabularies are converted into an unknown token. During translation, we use the alignments (from the attention mechanism) to replace the unknown tokens either with potential targets (obtained from an IBM Model-1 trained on the parallel data) or with the source word itself (if no target was found) (Mi et al., 2016). We use an embedding dimension of 620 and fix the RNN GRU lay- | 1702.01806#4 | 1702.01806#6 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#6 | Beam Search Strategies for Neural Machine Translation | [Figure 1 plot residue: BLEU and average fan out per sentence vs. beam size.] Figure 1: German→English: | 1702.01806#5 | 1702.01806#7 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#7 | Beam Search Strategies for Neural Machine Translation | Original beam-search strategy with different beam sizes on newstest2014. [Figure 2 plot residue: BLEU and average fan out per sentence vs. relative pruning threshold, beam size = 5.] Figure 2: | 1702.01806#6 | 1702.01806#8 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#8 | Beam Search Strategies for Neural Machine Translation | German→English: Different values of relative pruning measured on newstest2014. ers to be of 1000 cells each. For the training procedure, we use SGD (Bishop, 1995) to update model parameters with a mini-batch size of 64. The training data is shuffled after each epoch. We measure the decoding speed by two numbers. First, we compare the actual speed relative to the same setup without any pruning. Secondly, we measure the average fan out per time step. For each time step, the fan out is defined as the number of candidates we expand. Fan out has an upper bound of the size of the beam, but can be decreased either due to early stopping (we reduce the beam every time we predict an end-of-sentence symbol) or by the proposed pruning schemes. For each pruning technique, we run the experiments with different pruning thresholds and chose the largest threshold that did not degrade the translation performance based on a selection set. In Figure 1, you can see the German→English translation performance and the average fan out per sentence for different beam sizes. Based on this experiment, we decided to run our pruning experiments for beam sizes 5 and 14. The German→English results can be found in Table 1. | 1702.01806#7 | 1702.01806#9 | 1702.01806 | [
"1605.03209"
]
|
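The two speed numbers described above are straightforward to compute; the sketch below shows one possible implementation (the per-sentence aggregation shown here is an assumed reading of the table columns, not code from the paper).

```python
def fan_out_per_sentence(candidate_counts):
    """candidate_counts[t] = number of candidates expanded at time step t for one sentence."""
    total = sum(candidate_counts)                 # 'tot fan out per sent'
    average = total / len(candidate_counts)       # 'avg fan out per sent'
    return average, total

def relative_speed_up(time_with_pruning, time_without_pruning):
    return 1.0 - time_with_pruning / time_without_pruning   # e.g. 0.43 for a 43% speed-up

print(fan_out_per_sentence([5, 5, 4, 3, 2]))      # (3.8, 19)
```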
1702.01806#9 | Beam Search Strategies for Neural Machine Translation | By using the combination of all pruning techniques, we can speed up the decoding process by 13% for beam size 5 and by 43% for beam size 14 without any drop in performance. The relative pruning technique is the best working one for beam size 5, whereas the absolute pruning technique works best for beam size 14. In Figure 2, the decoding speeds with different relative pruning thresholds for beam size 5 are illustrated. Setting the threshold higher than 0.6 hurts the translation performance. A nice side effect is that it has become possible to decode without any fixed | 1702.01806#8 | 1702.01806#10 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#10 | Beam Search Strategies for Neural Machine Translation | beam size when we apply pruning. Nevertheless, the decoding speed drops while the translation performance does not change. Further, we looked at the number of search errors introduced by our pruning schemes (the number of times we prune the best scoring hypothesis). 5% of the sentences change due to search errors for beam size 5 and 9% of the sentences change for beam size 14 when using all four pruning techniques together. The Chinese→English translation results can be found in Table 2. We can speed up the decoding process by 10% for beam size 5 and by 24% for beam size 14 without loss in translation quality. In addition, we measured the number of search errors introduced by pruning the search. Only 4% of the sentences change for beam size 5, whereas 22% of the sentences change for beam size 14. # 6 Conclusion The original beam search decoder used in Neural Machine Translation is very simple. It generates translations from left-to-right while looking at a fixed number (beam) of candidates from the last time step only. By setting the beam size large enough, we ensure that the best translation performance can be reached, with the drawback that many candidates whose scores are far away from the best are also explored. In this paper, we introduced several pruning techniques which prune candidates whose scores are far away from the best one. By applying a combination of absolute and relative pruning schemes, we speed up the decoder by up to 43% without losing any translation quality. Putting more diversity into the decoder did not improve the translation quality. | 1702.01806#9 | 1702.01806#11 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#11 | Beam Search Strategies for Neural Machine Translation | x beam size when we apply pruning. Nevertheless, the de- coding speed drops while the translation perfor- mance did not change. Further, we looked at the number of search errors introduced by our prun- ing schemes (number of times we prune the best scoring hypothesis). 5% of the sentences change due to search errors for beam size 5 and 9% of the sentences change for beam size 14 when using all four pruning techniques together. The Chineseâ English translation results can be found in Table 2. We can speed up the decoding process by 10% for beam size 5 and by 24% for beam size 14 without loss in translation quality. In addition, we measured the number of search errors introduced by pruning the search. Only 4% of the sentences change for beam size 5, whereas 22% of the sentences change for beam size 14. # 6 Conclusion The original beam search decoder used in Neu- ral Machine Translation is very simple. It gen- erated translations from left-to-right while look- ing at a ï¬ x number (beam) of candidates from the last time step only. By setting the beam size large enough, we ensure that the best translation per- formance can be reached with the drawback that many candidates whose scores are far away from the best are also explored. In this paper, we in- troduced several pruning techniques which prune candidates whose scores are far away from the best one. By applying a combination of absolute and relative pruning schemes, we speed up the decoder by up to 43% without losing any translation qual- ity. Putting more diversity into the decoder did not improve the translation quality. | 1702.01806#10 | 1702.01806#12 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#12 | Beam Search Strategies for Neural Machine Translation | beam speed avg fan out tot fan out newstest2014 newstest2015 per sent BLEU TER BLEU TER size 55.4 1 53.7 5 53.8 5 53.7 5 53.8 5 53.8 5 53.8 5 53.5 14 53.4 14 53.5 14 53.4 14 53.4 14 53.4 14 53.3 - pruning per sent 1.00 4.54 3.71 4.11 4.25 4.54 3.64 12.19 10.38 9.49 10.27 12.21 8.44 28.46 up - - 6% 5% 5% 0% 13% - 10% 29% 24% 1% 43% - 25 122 109 116 118 126 101 363 315 279 306 347 260 979 56.8 54.6 54.7 54.6 54.7 54.6 54.6 54.3 54.3 54.3 54.4 54.4 54.5 54.4 25.5 27.3 27.3 27.3 27.3 27.4 27.3 27.6 27.6 27.6 27.6 27.6 27.6 27.6 26.1 27.4 27.3 27.4 27.4 27.5 27.3 27.6 27.6 27.6 27.7 27.7 27.6 27.6 no pruning no pruning rp=0.6 ap=2.5 rpl=0.02 mc=3 rp=0.6,ap=2.5,rpl=0.02,mc=3 no pruning rp=0.3 ap=2.5 rpl=0.3 mc=3 rp=0.3,ap=2.5,rpl=0.3,mc=3 rp=0.3,ap=2.5,rpl=0.3,mc=3 | 1702.01806#11 | 1702.01806#13 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#13 | Beam Search Strategies for Neural Machine Translation | Table 2: Results Chinese→English: relative pruning (rp), absolute pruning (ap), relative local pruning (rpl) and maximum candidates per node (mc). # References D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. ArXiv e-prints. Christopher M Bishop. 1995. Neural networks for pattern recognition. Oxford University Press. 2016 conference on machine translation (wmt16). Proceedings of WMT. Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. 2013. Audio chord recognition with recurrent neural networks. | 1702.01806#12 | 1702.01806#14 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#14 | Beam Search Strategies for Neural Machine Translation | In ISMIR. Citeseer, pages 335-340. Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711. | 1702.01806#13 | 1702.01806#15 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#15 | Beam Search Strategies for Neural Machine Translation | Xiaoguang Hu, Wei Li, Xiang Lan, Hua Wu, and Haifeng Wang. 2015. Improved beam search with constrained softmax for nmt. Proceedings of MT Summit XV, page 297. Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL. Beijing, China, pages 1-10. | 1702.01806#14 | 1702.01806#16 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#16 | Beam Search Strategies for Neural Machine Translation | Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL. Beijing, China, pages 11-19. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine translation. arXiv preprint arXiv:1605.03209. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Ilya Sutskever, Oriol Vinyals, and Quoc V. | 1702.01806#15 | 1702.01806#17 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#17 | Beam Search Strategies for Neural Machine Translation | Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada. pages 3104-3112. http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. | 1702.01806#16 | 1702.01806#18 | 1702.01806 | [
"1605.03209"
]
|
1702.01806#18 | Beam Search Strategies for Neural Machine Translation | Richard Zens, Daisy Stanton, and Peng Xu. 2012. A systematic comparison of phrase table pruning techniques. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 972-983. | 1702.01806#17 | 1702.01806 | [
"1605.03209"
]
|
|
1701.08718#0 | Memory Augmented Neural Networks with Wormhole Connections | arXiv:1701.08718v1 [cs.LG] 30 Jan 2017 # Memory Augmented Neural Networks with Wormhole Connections # Caglar Gulcehre Montreal Institute for Learning Algorithms Universite de Montreal Montreal, Canada [email protected] # Sarath Chandar Montreal Institute for Learning Algorithms Universite de Montreal Montreal, Canada [email protected] # Yoshua Bengio Montreal Institute for Learning Algorithms Universite de Montreal Montreal, Canada [email protected] # Abstract | 1701.08718#1 | 1701.08718 | [
"1609.01704"
]
|
|
1701.08718#1 | Memory Augmented Neural Networks with Wormhole Connections | Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations, which helps substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them. # 1. Introduction | 1701.08718#0 | 1701.08718#2 | 1701.08718 | [
"1609.01704"
]
|
1701.08718#2 | Memory Augmented Neural Networks with Wormhole Connections | Recurrent Neural Networks (RNNs) are neural network architectures that are designed to handle temporal dependencies in sequential prediction problems. However, it is well known that RNNs suffer from the issue of vanishing gradients as the length of the sequence and the dependencies increase (Hochreiter, 1991; Bengio et al., 1994). Long Short Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997) were proposed as an alternative architecture which can handle long range dependencies better than a vanilla RNN. | 1701.08718#1 | 1701.08718#3 | 1701.08718 | [
"1609.01704"
]
|
1701.08718#3 | Memory Augmented Neural Networks with Wormhole Connections | A simplified version of the LSTM unit called the Gated Recurrent Unit (GRU), proposed in (Cho et al., 2014), has proven to be successful in a number of applications (Bahdanau et al., 2015; Xu et al., 2015; Trischler et al., 2016; Kaiser and Sutskever, 2015; Serban et al., 2016). Even though LSTMs and GRUs attempt to solve the vanishing gradient problem, the memory in both architectures is stored in a single hidden vector as it is done in an RNN and hence accessing the information too far in the past can still be diffi | 1701.08718#2 | 1701.08718#4 | 1701.08718 | [
"1609.01704"
]
|
1701.08718#4 | Memory Augmented Neural Networks with Wormhole Connections | cult. In other words, LSTM and GRU models have a limited ability to perform a search through their past memories when they need to access relevant information for making a prediction. Extending the capabilities of neural networks with a memory component has been explored in the literature on different applications with different architectures (Weston et al., 2015; Graves et al., 2014; Joulin and Mikolov, 2015; Grefenstette et al., 2015; Sukhbaatar et al., 2015; Bordes et al., 2015; Chandar et al., 2016; Gulcehre et al., 2016; Graves et al., 2016; Rae et al., 2016). Memory augmented neural networks (MANN) such as neural Turing machines (NTM) (Graves et al., 2014; Rae et al., 2016), dynamic NTM (D-NTM) (Gulcehre et al., 2016), and Differentiable Neural Computers (DNC) (Graves et al., 2016) use an external memory (usually a matrix) to store information, and the MANN's controller can learn to both read from and write into the external memory. As we show here, it is in general possible to use particular MANNs to explicitly store the previous hidden states of an RNN in the memory, and that will provide shortcut connections through time, called here wormhole connections, to look into the history of the states of the RNN controller. | 1701.08718#3 | 1701.08718#5 | 1701.08718 | [
"1609.01704"
]
|
1701.08718#5 | Memory Augmented Neural Networks with Wormhole Connections | Learning to read and write into an external memory by using neural networks gives the model more freedom or flexibility to retrieve information from its past, forget, or store new information into the memory. However, if the addressing mechanism for read and/or write operations is continuous (like in the NTM and continuous D-NTM), then the access may be too diffuse, especially early on during training. This can hurt especially the writing operation, since a diffused write operation will overwrite a large fraction of the memory at each step, yielding fast vanishing of the memories (and gradients). On the other hand, discrete addressing, as used in the discrete D-NTM, should be able to perform this search through the past, but prevents us from using straight backpropagation for learning how to choose the address. We investigate the flow of the gradients and how the wormhole connections introduced by the controller affect it. Our results show that the wormhole connections created by the controller of the MANN can significantly reduce the effects of the vanishing gradients by shortening the paths that the signal needs to travel between the dependencies. | 1701.08718#4 | 1701.08718#6 | 1701.08718 | [
"1609.01704"
]
|
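The contrast drawn above between continuous and discrete addressing can be made concrete with a small hedged sketch: a continuous (softmax-weighted) read mixes every memory slot, while a discrete read commits to a single slot. Training the discrete choice (e.g., with REINFORCE-style estimators or straight-through tricks, as discussed for the discrete D-NTM) is omitted here.

```python
import numpy as np

def continuous_read(memory, scores):
    """Soft attention over slots (NTM / continuous D-NTM style): a convex mixture of all rows."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory          # diffuse: every slot contributes (a soft write would likewise touch every slot)

def discrete_read(memory, scores):
    """Hard attention (discrete D-NTM / TARDIS style): exactly one slot is returned."""
    return memory[int(np.argmax(scores))]

memory = np.random.randn(8, 16)      # 8 slots holding 16-dimensional contents
scores = np.random.randn(8)
print(continuous_read(memory, scores).shape, discrete_read(memory, scores).shape)
```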
1701.08718#6 | Memory Augmented Neural Networks with Wormhole Connections | We also discuss how MANNs can generalize to sequences longer than the ones seen during training. In a discrete D-NTM, the controller must learn to read from and write into the external memory by itself and, additionally, it should also learn the reader/writer synchronization. This can make learning more challenging. In spite of this difficulty, Gulcehre et al. (2016) reported that the discrete D-NTM can learn faster than the continuous D-NTM on some of the bAbI tasks. We provide a formal analysis of gradient flow in MANNs based on discrete addressing and justify this result. In this paper, we also propose a new MANN based on discrete addressing called TARDIS (Temporal Automatic Relation Discovery in Sequences). In TARDIS, memory access is based on tying the write and read heads of | 1701.08718#5 | 1701.08718#7 | 1701.08718 | [
"1609.01704"
]
|
1701.08718#7 | Memory Augmented Neural Networks with Wormhole Connections | the model after memory is filled up. When the memory is not full, the write head stores information in memory in sequential order. The main characteristics of TARDIS are as follows. TARDIS is a simple memory augmented neural network model which can represent long-term dependencies efficiently by using an external memory of small size. TARDIS represents the dependencies between the hidden states inside the memory. We show both theoretically and experimentally that TARDIS fixes to a large extent the problems related to long-term dependencies. Our model can also store sub-sequences or sequence chunks into the memory. As a consequence, the controller can learn to represent high-level temporal abstractions as well. TARDIS performs well on several structured output prediction tasks as verified in our experiments. The idea of using external memory with attention can be justified with the concept of mental time travel which humans do occasionally to solve daily tasks. In particular, in the cognitive science literature, the concept of chronesthesia is known to be a form of consciousness which allows humans to think about time subjectively and perform mental time-travel (Tulving, 2002). TARDIS is inspired by this ability of humans which allows one to look up past memories and plan for the future using the episodic memory. | 1701.08718#6 | 1701.08718#8 | 1701.08718 | [
"1609.01704"
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.