Dataset schema (column: type and value range):
doi: string, length 10
chunk-id: int64, range 0 to 936
chunk: string, 401 to 2.02k characters
id: string, 12 to 14 characters
title: string, 8 to 162 characters
summary: string, 228 to 1.92k characters
source: string, length 31
authors: string, 7 to 6.97k characters
categories: string, 5 to 107 characters
comment: string, 4 to 398 characters
journal_ref: string, 8 to 194 characters
primary_category: string, 5 to 17 characters
published: string, length 8
updated: string, length 8
references: list
1603.09320
28
A simple choice for the optimal mL is 1/ln(M); this corresponds to the skip list parameter p=1/M with an average single-element overlap between the layers. Simulations done on an Intel Core i7 5930K CPU show that the proposed selection of mL is a reasonable choice (see Fig. 3 for data on 10M random d=4 vectors). In addition, the plot demonstrates a massive speedup on low dimensional data when increasing the mL from zero and the effect of using the heuristic for selection of the graph connections. It is hard to expect the same behavior for high dimensional data since in this case the k-NN graph already has
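The level-assignment rule behind this choice can be checked numerically. Below is a small self-contained sketch (not code from the paper; the helper name random_level and the value M=16 are illustrative) that draws levels as l = floor(-ln(unif(0,1))·mL), the exponentially decaying distribution described in the abstract, and confirms that with mL = 1/ln(M) the probability of reaching layer k is M^(-k), i.e. the skip-list parameter p = 1/M.

```python
import math
import random

M = 16
m_L = 1.0 / math.log(M)  # the autoselected value 1/ln(M)

def random_level(m_L, rng=random):
    # l = floor(-ln(unif(0,1)) * m_L); 1 - rng.random() avoids log(0).
    return int(-math.log(1.0 - rng.random()) * m_L)

samples = [random_level(m_L) for _ in range(1_000_000)]
for k in (1, 2):
    frac = sum(l >= k for l in samples) / len(samples)
    print(k, round(frac, 4), M ** -k)  # empirical share at layer >= k vs. M**(-k)
```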
1603.09320#28
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
29
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 1994. K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural net- works. arXiv:1609.01704, 2016. A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013. David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv:1609.09106, 2016. K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
1603.09025#29
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
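As a rough illustration of the reparameterization described in this abstract, the sketch below applies batch normalization separately to the input-to-hidden and hidden-to-hidden pre-activations of a single LSTM step, and to the cell state before the output nonlinearity. This is a simplification under stated assumptions, not the authors' code: the function names (bn, bn_lstm_step) are made up, a single set of batch statistics is computed per call (the paper keeps separate statistics per time step), a single scalar γ stands in for the per-stream parameters, and the small initial γ of 0.1 only reflects the initialization range explored in the appendix.

```python
import numpy as np

def bn(x, gamma, beta=0.0, eps=1e-5):
    # Normalize over the batch dimension (axis 0), then scale and shift.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bn_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b, gamma=0.1):
    # Batch-normalize the two pre-activation streams separately, then sum.
    pre = bn(x_t @ Wx, gamma) + bn(h_prev @ Wh, gamma) + b
    i, f, g, o = np.split(pre, 4, axis=1)
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    # Also normalize the cell state feeding the output nonlinearity.
    h_t = sigmoid(o) * np.tanh(bn(c_t, gamma))
    return h_t, c_t

# Smoke test with random weights (shapes only; not trained parameters).
rng = np.random.default_rng(0)
B, D, H = 8, 5, 4
x = rng.normal(size=(B, D))
h = c = np.zeros((B, H))
Wx = rng.normal(size=(D, 4 * H))
Wh = rng.normal(size=(H, 4 * H))
h, c = bn_lstm_step(x, h, c, Wx, Wh, b=np.zeros(4 * H))
print(h.shape, c.shape)  # (8, 4) (8, 4)
```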
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
29
[Figure panels (residue of plot images): query time vs mL, with legends for Simple neighbors vs Heuristic and an Autoselect marker, for 10M random d=4 vectors, 100k random d=1024 vectors, and the 5M SIFT learn dataset.] Fig. 3. Plots for query time vs mL parameter for 10M random vectors with d=4. The autoselected value 1/ln(M) for mL is shown by an arrow. Fig. 4. Plots for query time vs mL parameter for 100k random vectors with d=1024. The autoselected value 1/ln(M) for mL is shown by an arrow.
1603.09320#29
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
30
S. Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Master’s thesis, 1991. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 1997. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. abs/1502.03167, 2015. D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014. D. Krueger and R. Memisevic. Regularizing rnns by stabilizing activations. ICLR, 2016. David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, and Aaron Courville. Zoneout: Regularizing rnns by randomly preserving hidden activations. arXiv:1606.01305, 2016. C. Laurent, G. Pereyra, P. Brakel, Y. Zhang, and Y. Bengio. Batch normalized recurrent neural networks. ICASSP, 2016.
1603.09025#30
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
30
Fig. 5. Plots for query time vs mL parameter for 5M SIFT learn dataset. The autoselected value 1/ln(M) for mL is shown by an arrow. very short greedy algorithm paths [28]. Surprisingly, increasing the mL from zero leads to a measurable increase in speed on very high dimensional data (100k dense random d=1024 vectors, see plot in Fig. 4), and does not introduce any penalty for the Hierarchical NSW approach. For real data such as SIFT vectors [1] (which have complex mixed structure), the performance improvement by increasing the mL is higher, but less prominent at current settings compared to the improvement from the heuristic (see Fig. 5 for 1-NN search performance on 5 million 128-dimensional SIFT vectors from the learning set of BIGANN [13]).
1603.09320#30
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
31
C. Laurent, G. Pereyra, P. Brakel, Y. Zhang, and Y. Bengio. Batch normalized recurrent neural networks. ICASSP, 2016. Quoc V Le, N. Jaitly, and G. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv:1504.00941, 2015. Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv:1604.03640, 2016. M. Mahoney. Large text compression benchmark. 2009. M. P. Marcus, M. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of english: The penn treebank. Comput. Linguist., 1993. J. Martens and I. Sutskever. Learning recurrent neural networks with hessian-free optimization. In ICML, 2011. T. Mikolov, I. Sutskever, A. Deoras, H. Le, S. Kombrink, and J. Cernocky. Subword language modeling with neural networks. preprint, 2012. Yann Ollivier. Persistent contextual neural networks for learning symbolic data sequences. CoRR, abs/1306.0514, 2013.
1603.09025#31
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
31
Selection of the Mmax0 (the maximum number of connections that an element can have in the zero layer) also has a strong influence on the search performance, especially in case of high quality (high recall) search. Simulations show that setting Mmax0 to M (this corresponds to k-NN graphs on each layer if the neighbors selection heuristic is not used) leads to a very strong performance penalty at high recall. Simulations also suggest that 2∙M is a good choice for Mmax0: setting the parameter higher leads to performance degradation and excessive memory usage. Fig. 6 presents search performance results for the 5M SIFT learn dataset depending on the Mmax0 parameter (done on an Intel Core i5 2400 CPU). The suggested value gives performance close to optimal at different recalls. In all of the considered cases, use of the heuristic for proximity graph neighbors selection (alg. 4) leads to a higher or similar search performance compared to the
1603.09320#31
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
32
Yann Ollivier. Persistent contextual neural networks for learning symbolic data sequences. CoRR, abs/1306.0514, 2013. Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language mod- els: when are they needed? arXiv:1301.5650, 2013. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. arXiv:1211.5063, 2012. H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 2000. The Theano Development Team et al. Theano: A Python framework for fast computation of mathe- matical expressions. arXiv e-prints, abs/1605.02688, May 2016. T. Tieleman and G. Hinton. Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
1603.09025#32
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
32
In all of the considered cases, use of the heuristic for proximity graph neighbors selection (alg. 4) leads to a higher or similar search performance compared to the naïve connection to the nearest neighbors (alg. 3). The effect is most prominent for low dimensional data, at high recall for mid-dimensional data, and for the case of highly clustered data (conceptually, a discontinuity can be regarded as a local low dimensional feature); see the comparison in Fig. 7 (Core i5 2400 CPU). When using the closest neighbors as connections for the proximity graph, the Hierarchical NSW algorithm fails to achieve a high recall for clustered data because the search gets stuck at the cluster boundaries. On the contrary, when the heuristic is used (together with candidates’ extension, line 3 in Alg. 4), clustering leads to even higher performance. For uniform and very high dimensional data there is little difference between the neighbor selection methods (see Fig. 4), possibly due to the fact that in this case almost all of the nearest neighbors are selected by the heuristic.
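A minimal sketch of the two selection rules being contrasted here, under stated assumptions: select_neighbors_naive simply keeps the M nearest candidates (the alg. 3 behavior), while select_neighbors_heuristic keeps a candidate only if it is closer to the base element than to every neighbor already kept, which is the core idea of alg. 4. The candidate extension and keep-pruned-connections options of the paper's algorithm are omitted, and the function names and the dist argument are illustrative, not the paper's pseudocode.

```python
import math

def select_neighbors_naive(q, candidates, M, dist):
    # Alg. 3 style: just the M closest candidates.
    return sorted(candidates, key=lambda c: dist(q, c))[:M]

def select_neighbors_heuristic(q, candidates, M, dist):
    # Simplified alg. 4 style: a candidate is kept only if it is closer
    # to q than to every neighbor selected so far.
    result = []
    for c in sorted(candidates, key=lambda c: dist(q, c)):
        if len(result) >= M:
            break
        if all(dist(q, c) < dist(c, r) for r in result):
            result.append(c)
    return result

points = [(1, 0), (1.1, 0.1), (0, 5), (5, 0)]
print(select_neighbors_naive((0, 0), points, 2, math.dist))      # two near-duplicates
print(select_neighbors_heuristic((0, 0), points, 2, math.dist))  # one near, one far link
```

On this toy example the heuristic keeps a link toward a distant point instead of the second near-duplicate, which is what lets the search cross gaps between clusters.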
1603.09320#32
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
33
Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619. K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv:1502.03044, 2015. L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In ICCV, 2015. S. Zhang, Y. Wu, T. Che, Z. Lin, R. Memisevic, R. Salakhutdinov, and Y. Bengio. Architectural complexity measures of recurrent neural networks. arXiv:1602.08210, 2016. # A CONVERGENCE OF POPULATION STATISTICS
1603.09025#33
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
33
The only meaningful construction parameter left for the user is M. A reasonable range of M is from 5 to 48. Simulations show that smaller M generally produces better results for lower recalls and/or lower dimensional data, while bigger M is better for high recall and/or high dimensional data (see Fig. 8 for illustration, Core i5 2400 CPU). The parameter also defines the memory consumption of the algorithm (which is proportional to M), so it should be selected with care. Selection of the efConstruction parameter is straightforward. As suggested in [26], it has to be large enough to produce K-ANNS recall close to unity during the construction process (0.95 is enough for most use cases). And just like in [26], this parameter can possibly
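Collecting the choices discussed so far in one place, the sketch below is a hypothetical helper (the names hnsw_params, Mmax, Mmax0, mL and efConstruction are illustrative, not a particular library's API): M roughly in the 5-48 range, Mmax = M, Mmax0 = 2·M, mL = 1/ln(M), and an efConstruction large enough for near-unity construction-time recall (a later part of the paper cites 100 as sufficient for a 10M SIFT index).

```python
import math

def hnsw_params(M=16, ef_construction=100):
    # Defaults derived from the parameter discussion in the text.
    return {
        "M": M,
        "Mmax": M,                # max connections per element on layers > 0
        "Mmax0": 2 * M,           # max connections per element on layer 0
        "mL": 1.0 / math.log(M),  # level-generation normalization factor
        "efConstruction": ef_construction,
    }

print(hnsw_params(16))
```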
1603.09320#33
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
34
# A CONVERGENCE OF POPULATION STATISTICS [Figure panels (residue of plot images): mean of recurrent term, mean of cell state, and variance of recurrent term plotted against time steps.] Figure 5: Convergence of population statistics to stationary distributions on the Penn Treebank task. The horizontal axis denotes RNN time. Each curve corresponds to a single hidden unit. Only a random subset of units is shown. See Section 3 for discussion. # B SENSITIVITY TO INITIALIZATION OF γ In Section 4 we investigated the effect of initial γ on gradient flow. To show the practical implications of this, we performed several experiments on the pMNIST and Penn Treebank benchmarks. The resulting performances are shown in Figure 6. The pMNIST training curves confirm that higher initial values of γ are detrimental to the optimization of the model. For the Penn Treebank task however, the effect is gone. We believe this is explained by the difference in the nature of the two tasks. For pMNIST, the model absorbs the input sequence and only at the end of the sequence does it make a prediction on which it receives feedback. Learning from this feedback requires propagating the gradient all the way back through the sequence.
1603.09025#34
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
34
[Figure panels (residue of plot images): query time for baseline vs heuristic on 10M random d=10 vectors (M=16, 10-NN), with and without 100 isolated clusters; recall error (1-recall) vs query time for several M values on 5M SIFT, d=128, 10-NN.] Fig. 6. Plots for query time vs Mmax0 parameter for 5M SIFT learn dataset. The autoselected value 2∙M for Mmax0 is shown by an arrow. Fig. 7. Effect of the method of neighbor selections (baseline corresponds to alg. 3, heuristic to alg. 4) on clustered (100 random isolated clusters) and non-clustered d=10 random vector data. Fig. 8. Plots for recall error vs query time for different parameters of M for Hierarchical NSW on 5M SIFT learn dataset.
1603.09320#34
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
35
In the Penn Treebank task on the other hand, the model makes a prediction at each timestep. At each step of the backward pass, a fresh learning signal is added to the backpropagated gradient. Essentially, the model is able to get off the ground by picking up short-term dependencies. This fails on pMNIST, which is dominated by long-term dependencies (Arjovsky et al., 2015). # C TEACHING MACHINES TO READ AND COMPREHEND: TASK SETUP We evaluate the models on the question answering task using the CNN corpus (Hermann et al., 2015), with placeholders for the named entities. We follow a similar preprocessing pipeline as Hermann et al. (2015). During training, we randomly sample the examples with replacement and shuffle the order of the placeholders in each text inside the minibatch. We use a vocabulary of 65829 words. We deviate from Hermann et al. (2015) in order to save computation: we use only the 4 most relevant sentences from the description, as identified by a string matching procedure. Both the training and validation sets are preprocessed in this way. Due to imprecision this heuristic sometimes strips the
1603.09025#35
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
35
Fig. 8. Plots for recall error vs query time for different parameters of M for Hierarchical NSW on 5M SIFT learn dataset. [Figure panels (residue of plot images): build time vs thread count on a 4X Xeon E5-4650 v2 (4x10 cores) and a Core i7-6850K (6 cores+HT); query time vs build time for 10M SIFT, d=128, M=16; average ef to reach target recall vs dataset size for d=4 random vectors, 1-NN.] Fig. 9. Construction time for Hierarchical NSW on 10M SIFT dataset for different numbers of threads on two CPUs. Fig. 10. Plots of the query time vs construction time tradeoff for Hierarchical NSW on 10M SIFT dataset. Fig. 11. Plots of the ef parameter required to get fixed accuracies vs the dataset size for d=4 random vector data. be auto-configured by using sample data.
1603.09320#35
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
36
[Figure panels (residue of plot images): training and validation curves on Permuted MNIST and on PTB for initial γ values of 0.10, 0.30, 0.50, 0.70 and 1.00, plotted against training steps.] Figure 6: Training curves on pMNIST and Penn Treebank for various initializations of γ.
1603.09025#36
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
36
Fig. 11. Plots of the ef parameter required to get fixed accuracies vs the dataset size for d=4 random vector data. be auto-configured by using sample data. The construction process can be easily and efficiently parallelized with only a few synchronization points (as demonstrated in Fig. 9) and no measurable effect on index quality. The construction speed/index quality tradeoff is controlled via the efConstruction parameter. The tradeoff between the search time and the index construction time is presented in Fig. 10 for a 10M SIFT dataset and shows that a reasonable quality index can be constructed for efConstruction=100 on a 4X 2.4 GHz 10-core Xeon E5-4650 v2 CPU server in just 3 minutes. Further increase of efConstruction leads to little extra performance but in exchange for significantly longer construction time. # 4.2 Complexity analysis 4.2.1 Search complexity The complexity scaling of a single search can be strictly analyzed under the assumption that we build exact Delaunay graphs instead of the approximate ones. Suppose we have found the closest element on some layer (this is guaranteed by having the Delaunay graph) and then descended to the next layer. One can show that the average number of steps before we find the closest element in the layer is bounded by a constant.
1603.09320#36
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
37
Figure 6: Training curves on pMNIST and Penn Treebank for various initializations of γ. answers from the passage, putting an upper bound of 57% on the validation accuracy that can be achieved. For the reported performances, the first three models (LSTM, BN-LSTM and BN-everywhere) are trained using the exact same hyperparameters, which were chosen because they work well for the baseline. The hidden state is composed of 240 units. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 10 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 8 × 10^-5. For BN-e* and BN-e**, we use the same hyperparameters except that we reduce the learning rate to 8 × 10^-4 and the minibatch size to 40. # D HYPERPARAMETER SEARCHES Table 5 reports hyperparameter values that were tried in the experiments.
1603.09025#37
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
37
Indeed, the layers are not correlated with the spatial positions of the data elements and, thus, when we traverse the graph there is a fixed probability p=exp(-mL) that the next node belongs to the upper layer. However, the search on the layer always terminates before it reaches the element which belongs to the higher layer (otherwise the search on the upper layer would have stopped on a different element), so the probability of not reaching the target on the s-th step is bounded by exp(-s·mL). Thus the expected number of steps in a layer is bounded by the sum of a geometric progression S = 1/(1-exp(-mL)), which is independent of the dataset size. If we assume that the average degree of a node in the Delaunay graph is capped by a constant C in the limit of a large dataset (this is the case for random Euclidean data [48], but can in principle be violated in exotic spaces), then the overall average number of distance evaluations in a layer is bounded by a constant C·S, independently of the dataset size. And since the expectation of the maximum layer index, by construction, scales as O(log(N)), the overall complexity scaling is O(log(N)), in agreement with the simulations on low dimensional datasets.
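The logarithmic part of this argument is easy to illustrate numerically: since an element's level follows an exponentially decaying distribution, the maximum level over N elements grows roughly like mL·ln(N). The snippet below is an illustrative simulation (the values of M and N are examples, not taken from the paper).

```python
import math
import random

M = 16
m_L = 1.0 / math.log(M)
rng = random.Random(0)

for N in (10_000, 100_000, 1_000_000):
    max_level = max(int(-math.log(1.0 - rng.random()) * m_L) for _ in range(N))
    print(N, max_level, round(m_L * math.log(N), 2))  # observed max vs. m_L * ln(N)
```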
1603.09320#37
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
38
# D HYPERPARAMETER SEARCHES Table 5 reports hyperparameter values that were tried in the experiments.
(a) MNIST and pMNIST: Learning rate 1e-2, 1e-3, 1e-4; RMSProp momentum 0.5, 0.9; Hidden state size 100, 200, 400; Initial γ 1e-1, 3e-1, 5e-1, 7e-1, 1.0
(b) Penn Treebank: Learning rate 1e-1, 1e-2, 2e-2, 1e-3; Hidden state size 800, 1000, 1200, 1500, 2000; Batch size 32, 64, 100, 128; Initial γ 1e-1, 3e-1, 5e-1, 7e-1, 1.0
(c) Text8: Learning rate 1e-1, 1e-2, 1e-3; Hidden state size 500, 1000, 2000, 4000
(d) Attentive Reader: Learning rate 8e-3, 8e-4, 8e-5, 8e-6; Hidden state size 60, 120, 240, 280
Table 5: Hyperparameter values that have been explored in the experiments.
1603.09025#38
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
38
by the construction scales as O(log(N)), the overall complexity scaling is O(log(N)), in agreement with the simulations on low dimensional datasets. The initial assumption of having the exact Delaunay graph is violated in Hierarchical NSW due to the usage of an approximate edge selection heuristic with a fixed number of neighbors per element. Thus, to avoid getting stuck in a local minimum the greedy search algorithm employs a backtracking procedure on the zero layer. Simulations show that at least for low dimensional data (Fig. 11, d=4) the dependence of the required ef parameter (which determines the complexity via the minimal number of hops during the backtracking) to get a fixed recall saturates with the rise of the dataset size. The backtracking complexity is an additive term with respect to the final complexity; thus, as follows from the empirical data, inaccuracies of the Delaunay graph approximation do not alter the scaling.
1603.09320#38
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09025
39
Table 5: Hyperparameter values that have been explored in the experiments. For MNIST and pMNIST, the hyperparameters were varied independently. For Penn Treebank, we performed a full grid search on learning rate and hidden state size, and later performed a sensitivity analysis on the batch size and initial γ. For the text8 task and the experiments with the Attentive Reader, we carried out a grid search on the learning rate and hidden state size. The same values were tried for both the baseline and our BN-LSTM. In each case, our reported results are those of the model with the best validation performance.
1603.09025#39
Recurrent Batch Normalization
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
http://arxiv.org/pdf/1603.09025
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville
cs.LG
null
null
cs.LG
20160330
20170228
[ { "id": "1609.01704" }, { "id": "1609.09106" }, { "id": "1602.08210" }, { "id": "1504.00941" }, { "id": "1606.01305" }, { "id": "1511.06464" }, { "id": "1604.03640" }, { "id": "1512.02595" }, { "id": "1607.06450" }, { "id": "1502.03044" } ]
1603.09320
39
Such empirical investigation of the Delaunay graph approximation resilience requires having the average number of Delaunay graph edges independent of the dataset, in order to evidence how well the edges are approximated with a constant number of connections in Hierarchical NSW. However, the average degree of the Delaunay graph scales exponentially with the dimensionality [39], thus for high dimensional data (e.g. d=128) the aforementioned condition requires having extremely large datasets, making such empirical investigation unfeasible. Further analytical evidence is required to confirm whether the resilience of Delaunay graph approximations generalizes to higher dimensional spaces. 4.2.2 Construction complexity The construction is done by iterative insertions of all elements, while the insertion of an element is merely a sequence of K-ANN-searches at different layers with a subsequent use of the heuristic (which has fixed complexity at fixed efConstruction). The average number of layers for an element to be added in is a constant that depends on mL: E[l + 1] = E[-ln(unif(0,1))·mL] + 1 = mL + 1. Thus, the insertion complexity scaling is the same as the one for the search, meaning that at least for relatively low dimensional datasets the construction time scales as O(N∙log(N)).
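A quick illustrative check of this expectation (not from the paper's code; M=16 is an example value): sampling levels as l = floor(-ln(unif(0,1))·mL) shows that the average number of layers an element is inserted into, l+1, stays below the mL+1 bound; the floor in the level assignment makes the sampled average somewhat smaller than the continuous expectation.

```python
import math
import random

m_L = 1.0 / math.log(16)
rng = random.Random(0)
n = 1_000_000
avg_layers = sum(int(-math.log(1.0 - rng.random()) * m_L) + 1 for _ in range(n)) / n
print(round(avg_layers, 3), round(m_L + 1, 3))  # sampled average vs. the m_L + 1 bound
```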
1603.09320#39
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
40
Thus, the insertion complexity scaling is the same as the one for the search, meaning that at least for relatively low dimensional datasets the construction time scales as O(N∙log(N)). 4.2.3 Memory cost The memory consumption of the Hierarchical NSW is mostly defined by the storage of graph connections. The number of connections per element is Mmax0 for the zero layer and Mmax for all other layers. Thus, the average memory consumption per element is (Mmax0+mL∙Mmax)∙bytes_per_link. If we limit the maximum total number of elements to approximately four billion, we can use four-byte unsigned integers to store the connections. Tests suggest that typical close to optimal M values usually lie in a range between 6 and 48. This means that the typical memory requirements for the index (excluding the size of the data) are about 60-450 bytes per object, which is in good agreement with the simulations. set of tests [34] in general metric spaces in which NSW failed (5.3) and comparison to state-of-the-art PQ-algorithms on a large 200M SIFT dataset (5.4).
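The 60-450 bytes figure can be reproduced directly from the formula above; the snippet below is an illustrative calculation assuming the parameter choices discussed earlier (Mmax = M, Mmax0 = 2·M, mL = 1/ln(M), four-byte links), and the function name is made up for the example.

```python
import math

def link_bytes_per_element(M, bytes_per_link=4):
    m_L = 1.0 / math.log(M)
    return (2 * M + m_L * M) * bytes_per_link  # (Mmax0 + m_L * Mmax) * bytes_per_link

for M in (6, 16, 48):
    print(M, round(link_bytes_per_element(M)))  # roughly 61, 151 and 434 bytes
```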
# 5 PERFORMANCE EVALUATION
The Hierarchical NSW algorithm was implemented in C++ on top of the Non-Metric Space Library (nmslib) [49]1, which already had a functional NSW implementation (under the name “sw-graph”). Due to several limitations posed by the library, to achieve better performance the Hierarchical NSW implementation uses custom distance functions together with C-style memory management, which avoids unnecessary implicit addressing and allows efficient hardware and software prefetching during the graph traversal.
Comparing the performance of K-ANNS algorithms is a nontrivial task since the state of the art is constantly changing as new algorithms and implementations emerge. In this work we concentrated on comparison with the best algorithms in Euclidean spaces that have open source implementations. An implementation of the Hierarchical NSW algorithm presented in this paper is also distributed as a part of the open source nmslib library1, together with an external C++ memory-efficient header-only version with support for incremental index construction2. The comparison section consists of four parts: comparison to the baseline NSW (5.1), comparison to the state-of-the-art algorithms in Euclidean spaces (5.2), a rerun of the subset of tests [34] in general metric spaces in which NSW failed (5.3), and a comparison to state-of-the-art PQ-algorithms on a large 200M SIFT dataset (5.4).

5.1 Comparison with baseline NSW
For the baseline NSW algorithm implementation we used the “sw-graph” from nmslib 1.1 (which is slightly updated compared to the implementation tested in [33, 34]) to demonstrate the improvements in speed and algorithmic complexity (measured by the number of distance computations). Fig. 12(a) presents a comparison of Hierarchical NSW to the basic NSW algorithm for d=4 random hypercube data made on a Core i5 2400 CPU (10-NN search). Hierarchical NSW uses far fewer distance computations during a search on this dataset, especially at high recalls.
The scalings of the algorithms on a d=8 random hypercube dataset for a 10-NN search with a fixed recall of 0.95 are presented in Fig. 12(b). It clearly demonstrates that Hierarchical NSW has a complexity scaling for this setting not worse than logarithmic and outperforms NSW at any dataset size. The performance advantage in absolute time (Fig. 12(c)) is even higher due to the improved algorithm implementation.

Fig. 12. Comparison between NSW and Hierarchical NSW: (a) distance calculation number vs accuracy tradeoff for a 10 million 4-dimensional random vectors dataset; (b-c) performance scaling in terms of the number of distance calculations (b) and raw query time (c) on an 8-dimensional random vectors dataset.
5.2 Comparison in Euclidean spaces
The main part of the comparison was carried out on vector datasets using the popular K-ANNS benchmark ann-benchmark3 as a testbed. The testing system utilizes the python bindings of the algorithms: it consecutively runs the K-ANN search for one thousand queries (randomly extracted from the initial dataset) with preset algorithm parameters, producing an output containing the recall and the average time of a single search (a minimal sketch of the recall computation is given after the list below). The considered algorithms are:
1. Baseline NSW algorithm from nmslib 1.1 (“sw-graph”).
2. FLANN 1.8.4 [6]. A popular library4 containing several algorithms, built into OpenCV5. We used the available auto-tuning procedure with several reruns to infer the best parameters.
3. Annoy6, 02.02.2016 build. A popular algorithm based on a random projection tree forest.
4. VP-tree. A general metric space algorithm with metric pruning [50] implemented as a part of nmslib 1.1.
5. FALCONN7, version 1.2. A new efficient LSH algorithm for cosine similarity data [51].
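The recall reported by the testbed is simply the overlap between the returned and the exact neighbors, averaged over the query set. A minimal sketch (not the actual ann-benchmarks code; all names are illustrative):

```python
# Minimal sketch of recall@k against exact brute-force neighbors,
# averaged over the query set, as reported by the testbed.
import numpy as np

def recall_at_k(approx_ids, exact_ids, k=10):
    """approx_ids, exact_ids: (n_queries, k) arrays of neighbor indices."""
    hits = [len(set(a[:k]) & set(e[:k])) for a, e in zip(approx_ids, exact_ids)]
    return sum(hits) / (len(approx_ids) * k)

# Example: exact neighbors via brute force on a toy dataset.
rng = np.random.default_rng(0)
data, queries = rng.random((1000, 4)), rng.random((5, 4))
d2 = ((queries[:, None, :] - data[None, :, :]) ** 2).sum(-1)
exact = np.argsort(d2, axis=1)[:, :10]
print(recall_at_k(exact, exact))  # 1.0 for the exact result itself
```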
1 https://github.com/searchivarius/nmslib
2 https://github.com/nmslib/hnsw
3 https://github.com/erikbern/ann-benchmarks
4 https://github.com/mariusmuja/flann
5 https://github.com/opencv/opencv
6 https://github.com/spotify/annoy

TABLE 1. Parameters of the datasets used in the vector space benchmark.

| Dataset | Description | Size | d | BF time | Distance |
|---|---|---|---|---|---|
| SIFT | Image feature vectors [3] | 1M | 128 | — | L2 |
| GloVe | Word embeddings trained on tweets [52] | 1.2M | 100 | 95 ms | cosine |
| CoPhIR | MPEG-7 features extracted from the images [53] | 2M | 272 | 370 ms | L2 |
| Random vectors | Random vectors in a hypercube | 30M | 4 | 500 ms | L2 |
| DEEP | One million subset of the billion deep image features dataset [14] | 1M | 96 | 60 ms | L2 |
| MNIST | Handwritten digit images [54] | 60k | 784 | 22 ms | L2 |
TABLE 2. Datasets used for the repetition of the non-metric data tests subset.

| Dataset | Description | Size | d | BF time | Distance |
|---|---|---|---|---|---|
| Wiki-sparse | TF-IDF (term frequency–inverse document frequency) vectors (created via GENSIM [58]) | 4M | 10^5 | 5.9 s | Sparse cosine |
| Wiki-8 | Topic histograms created from sparse TF-IDF vectors of the wiki-sparse dataset (created via GENSIM [58]) | 2M | 8 | — | Jensen–Shannon (JS) divergence |
| Wiki-128 | Topic histograms created from sparse TF-IDF vectors of the wiki-sparse dataset (created via GENSIM [58]) | 2M | 128 | 1.17 s | Jensen–Shannon (JS) divergence |
| ImageNet | Signatures extracted from LSVRC-2014 … | 1M | 272 | 18.3 s | SQFD |
| DNA | … from the Human Genome [34] | 1M | — | — | Edit distance |
The comparison was done on a 4× Xeon E5-4650 v2 Debian OS system with 128 GB of RAM. For every algorithm we carefully chose the best results at every recall range to evaluate the best possible performance (with initial values taken from the testbed defaults). All tests were done in a single-thread regime. Hierarchical NSW was compiled using GCC 5.3 with the -Ofast optimization flag.
The parameters and descriptions of the used datasets are outlined in Table 1. For all of the datasets except GloVe we used the L2 distance; for GloVe we used the cosine similarity, which is equivalent to L2 after vector normalization. The brute-force (BF) time is measured by the nmslib library.
Results for the vector data are presented in Fig. 13. For the SIFT, GloVe, DEEP and CoPhIR datasets Hierarchical NSW clearly outperforms the rivals by a large margin. For low dimensional data (d=4) Hierarchical NSW is slightly faster at high recall compared to Annoy, while strongly outperforming the other algorithms.

Fig. 13. Results of the comparison of Hierarchical NSW with open source implementations of K-ANNS algorithms on five datasets for 10-NN searches. The time of a brute-force search is denoted as BF.
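The equivalence between cosine similarity and L2 distance on normalized vectors follows from ||x − y||² = 2 − 2·cos(x, y) for unit vectors; a quick check of this identity (not part of the benchmark code):

```python
# After scaling vectors to unit length, ranking by L2 distance is equivalent
# to ranking by cosine similarity, since ||x - y||^2 = 2 - 2*cos(x, y).
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=100), rng.normal(size=100)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)

cos = float(x @ y)
l2_sq = float(np.sum((x - y) ** 2))
assert abs(l2_sq - (2.0 - 2.0 * cos)) < 1e-9
```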
5.3 Comparison in general spaces
A recent comparison of algorithms [34] in general spaces (i.e. non-symmetric or with violation of the triangle inequality) showed that the baseline NSW algorithm has severe problems on low dimensional datasets. To test the performance of the Hierarchical NSW algorithm we have repeated a subset of the tests from [34] on which NSW performed poorly or suboptimally. For that purpose we used the built-in nmslib testing system, which had scripts to run the tests from [34]. The evaluated algorithms included the VP-tree, permutation techniques (NAPP and brute-force filtering) [49, 55-57], the basic NSW algorithm and NNDescent-produced proximity graphs [29] (both paired with the NSW graph search algorithm). As in the original tests, for every dataset the results include either NSW or NNDescent, depending on which structure performed better. No custom distance functions or special memory management were used in this case for Hierarchical NSW, leading to some performance loss.
The datasets are summarized in Table 2. Further details of the datasets, spaces and algorithm parameter selection can be found in the original work [34]. The brute-force (BF) time is measured by the nmslib library.
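For reference, the Jensen–Shannon divergence used as the distance for the wiki-8 and wiki-128 topic histograms can be written in a few lines; this is an illustrative sketch, not the nmslib implementation:

```python
# Jensen-Shannon divergence between two histograms:
# JS(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m) with m = (p + q) / 2.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0 for identical histograms
```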
[Fig. 14 panels: 4M Wiki-sparse, 1M DNA (edit distance), 2M Wiki-8 (JS-divergence), 2M Wiki-128 (JS-divergence), 1M ImageNet; recall vs query time (number of distance computations for Wiki-8), 10-NN.]
Fig. 14. Results of the comparison of Hierarchical NSW with general space K-ANNS algorithms from the Non-Metric Space Library on five datasets for 10-NN searches. The time of a brute-force search is denoted as BF.

TABLE 3. Parameters for the comparison between Hierarchical NSW and Faiss on a 200M subset of the 1B SIFT dataset.

| Algorithm | Build time | Peak memory (runtime) | Parameters |
|---|---|---|---|
| Hierarchical NSW (1) | 5.6 hours | 64 GB | M=16, efConstruction=500 |
| Hierarchical NSW (2) | 42 minutes | 64 GB | M=16, efConstruction=40 |
| Faiss (1) | 12 hours | 30 GB | OPQ64, IMI2x14, PQ64 |
| Faiss (2) | 11 hours | 23.5 GB | OPQ32, IMI2x14, PQ32 |
The results are presented in Fig. 14. Hierarchical NSW significantly improves the performance of NSW and is the leader on every tested dataset. The strongest enhancement over NSW, by almost three orders of magnitude, is observed for the dataset with the lowest dimensionality, wiki-8 with the JS-divergence. This is an important result that demonstrates the robustness of Hierarchical NSW, as for the original NSW this dataset was a stumbling block. Note that for wiki-8, to nullify the effect of the implementation, the results are presented in terms of the number of distance computations instead of CPU time.

5.4 Comparison with product quantization based algorithms
Product quantization K-ANNS algorithms [10-17] are considered the state-of-the-art on billion scale datasets since they can efficiently compress the stored data, allowing modest RAM usage while achieving millisecond search times on modern CPUs.

Fig. 15. Results of the comparison with the Faiss library on the 200M SIFT dataset from [13]. The inset shows the scaling of the query time vs the dataset size for Hierarchical NSW.
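As background for the Faiss baselines, product quantization splits each vector into m subvectors and encodes each one with a small per-subspace codebook, so a vector is stored as m one-byte codes. The sketch below is only a toy illustration (random, untrained codebooks), not a faithful PQ implementation:

```python
# Toy product-quantization encode/decode: split into m subvectors and
# replace each with the index of its nearest codebook centroid.
import numpy as np

d, m, k = 16, 4, 256          # dimension, subquantizers, centroids per subspace
ds = d // m
rng = np.random.default_rng(0)
codebooks = rng.random((m, k, ds), dtype=np.float32)   # would normally be trained

def pq_encode(x):
    """Return m one-byte codes: nearest centroid index in each subspace."""
    codes = np.empty(m, dtype=np.uint8)
    for j in range(m):
        sub = x[j * ds:(j + 1) * ds]
        codes[j] = np.argmin(((codebooks[j] - sub) ** 2).sum(axis=1))
    return codes

def pq_decode(codes):
    """Reconstruct an approximate vector from its codes."""
    return np.concatenate([codebooks[j, codes[j]] for j in range(m)])

x = rng.random(d, dtype=np.float32)
print(pq_encode(x), np.linalg.norm(x - pq_decode(pq_encode(x))))
```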
To compare the performance of Hierarchical NSW against PQ algorithms we used the Facebook Faiss library8 as the baseline (a new library with implementations of state-of-the-art PQ algorithms [12, 15], released after the current manuscript was submitted), compiled with the OpenBLAS backend. The tests were done for a 200M subset of the 1B SIFT dataset [13] on a 4× Xeon E5-4650 v2 server with 128 GB of RAM. The ann-benchmark testbed was not feasible for these experiments because of its reliance on the 32-bit floating point format (requiring more than 100 GB just to store the data). To get the results for the Faiss PQ algorithms we utilized the built-in scripts with the parameters from the Faiss wiki9. For the Hierarchical NSW algorithm we used a special build outside of nmslib with a small memory footprint, simple non-vectorized integer distance functions and support for incremental index construction10.

8 https://github.com/facebookresearch/faiss, 2017 May build. From 2018 the Faiss library has its own implementation of Hierarchical NSW.
9 https://github.com/facebookresearch/faiss/wiki/Indexing-1G-vectors
10 https://github.com/nmslib/hnsw
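For orientation, the Hierarchical NSW settings from Table 3 map onto the index construction parameters exposed by the later hnswlib Python package; the following sketch is an assumption-laden illustration (toy data and hnswlib rather than the special C++ build actually used for the 200M SIFT experiments):

```python
# Hedged sketch: build an index with the Table 3 setting (1) parameters
# (M=16, efConstruction=500) using the hnswlib Python bindings.
import numpy as np
import hnswlib

dim, n = 128, 100_000                        # 200M vectors in the actual experiment
data = np.random.default_rng(0).random((n, dim), dtype=np.float32)

index = hnswlib.Index(space='l2', dim=dim)
index.init_index(max_elements=n, M=16, ef_construction=500)   # setting (1)
index.add_items(data)                        # incremental insertion is supported

index.set_ef(100)                            # query-time accuracy/speed knob
labels, distances = index.knn_query(data[:10], k=1)           # 1-NN as in Fig. 15
```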
The results are presented in Fig. 15, with a summary of the parameters in Table 3. The peak memory consumption was measured with the linux "time -v" tool in separate test runs after index construction for both algorithms. Even though Hierarchical NSW requires significantly more RAM, it can achieve much higher accuracy while offering a massive advance in search speed and much faster index construction. The inset in Fig. 15 presents the scaling of the query time vs the dataset size for Hierarchical NSW. Note that the scaling deviates from a pure logarithm, possibly due to the relatively high dimensionality of the dataset.
# 6 DISCUSSION
By using the structure decomposition of navigable small world graphs together with the smart neighbor selection heuristic, the proposed Hierarchical NSW approach overcomes several important problems of the basic NSW structure, advancing the state-of-the-art in K-ANN search. Hierarchical NSW offers excellent performance and is a clear leader on a large variety of datasets, surpassing the open source rivals by a large margin in case of high dimensional data. Even for the datasets where the previous algorithm (NSW) lost by orders of magnitude, Hierarchical NSW was able to come first. Hierarchical NSW supports continuous incremental indexing and can also be used as an efficient method for getting approximations of the k-NN and relative neighborhood graphs, which are byproducts of the index construction.
Robustness of the approach is a strong feature which makes it very attractive for practical applications. The algorithm is applicable in generalized metric spaces, performing the best on any of the datasets tested in this paper, and thus eliminating the need for a complicated selection of the best algorithm for a specific problem. We stress the importance of the algorithm's robustness since the data may have a complex structure with different effective dimensionality across the scales. For instance, a dataset can consist of points lying on a curve that randomly fills a high dimensional cube, thus being high dimensional at a large scale and low dimensional at a small scale. In order to perform an efficient search in such datasets an approximate nearest neighbor algorithm has to work well for both cases of high and low dimensionality.
There are several ways to further increase the efficiency and applicability of the Hierarchical NSW approach. There is still one meaningful parameter left which strongly affects the construction of the index: the number of added connections per layer M. Potentially, this parameter can be inferred directly by using different heuristics [4]. It would also be interesting to compare Hierarchical NSW on the full 1B SIFT and 1B DEEP datasets [10-14] and to add support for element updates and removal.
One of the apparent shortcomings of the proposed approach compared to the basic NSW is the loss of the possibility of distributed search. The search in the Hierarchical NSW structure always starts from the top layer, thus the structure cannot be made distributed by using the same techniques as described in [26] due to congestion of the higher layer elements. Simple workarounds can be used to distribute the structure, such as partitioning the data across cluster nodes as studied in [6]; however, in this case the total parallel throughput of the system does not scale well with the number of computer nodes. Still, there are other possible known ways to make this particular structure distributed. Hierarchical NSW is ideologically very similar to the well-known one-dimensional exact search probabilistic skip list structure, and thus can use the same techniques to make the structure distributed [45]. Potentially this can lead to even better distributed performance compared to the base NSW due to the logarithmic scalability and an ideally uniform load on the nodes.
# 7 ACKNOWLEDGEMENTS
We thank Leonid Boytsov for many helpful discussions, assistance with the Non-Metric Space Library integration and comments on the manuscript. We thank Seth Hoffert and Azat Davletshin for the suggestions on the manuscript and the algorithm, and the fellows who contributed to the algorithm on the github repository. We also thank Valery Kalyagin for support of this work. The reported study was funded by RFBR according to the research project No. 16-31-60104 mol_a_dk.

8 REFERENCES
[1] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[2] S. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman, "Indexing by Latent Semantic Analysis," J. Amer. Soc. Inform. Sci., vol. 41, pp. 391-407, 1990.
[3] P. N. Yianilos, "Data structures and algorithms for nearest neighbor search in general metric spaces," in SODA, 1993, vol. 93, no. 194, pp. 311-321.
[4] G. Navarro, "Searching in metric spaces by spatial approximation," The VLDB Journal, vol. 11, no. 1, pp. 28-46, 2002.
[5] E. S. Tellez, G. Ruiz, and E. Chavez, "Singleton indexes for nearest neighbor search," Information Systems, 2016.
[6] M. Muja and D. G. Lowe, "Scalable nearest neighbor algorithms for high dimensional data," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 36, no. 11, pp. 2227-2240, 2014.
[7] M. E. Houle and M. Nett, "Rank-based similarity search: Reducing the dimensional dependence," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 37, no. 1, pp. 136-150, 2015.
[8] A. Andoni, P. Indyk, T. Laarhoven, I. Razenshteyn, and L. Schmidt, "Practical and optimal LSH for angular distance," in Advances in Neural Information Processing Systems, 2015, pp. 1225-1233.
[9] P. Indyk and R. Motwani, "Approximate nearest neighbors: towards removing the curse of dimensionality," in Proceedings of the thirtieth annual ACM symposium on Theory of computing, 1998, pp. 604-613: ACM.
[10] J. Wang, J. Wang, G. Zeng, R. Gan, S. Li, and B. Guo, "Fast neighborhood graph search using cartesian concatenation," in Multimedia Data Mining and Analytics: Springer, 2015, pp. 397-417.
[11] M. Norouzi, A. Punjani, and D. J. Fleet, "Fast exact search in hamming space with multi-index hashing," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 36, no. 6, pp. 1107-1119, 2014.
[12] A. Babenko and V. Lempitsky, "The inverted multi-index," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, 2012, pp. 3069-3076: IEEE.
[13] H. Jegou, M. Douze, and C. Schmid, "Product quantization for nearest neighbor search," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 33, no. 1, pp. 117-128, 2011.
[14] A. Babenko and V. Lempitsky, "Efficient indexing of billion-scale datasets of deep descriptors," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2055-2063.
[15] M. Douze, H. Jégou, and F. Perronnin, "Polysemous codes," in European Conference on Computer Vision, 2016, pp. 785-801: Springer.
[16] Y. Kalantidis and Y. Avrithis, "Locally optimized product quantization for approximate nearest neighbor search," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 2321-2328.
[17] P. Wieschollek, O. Wang, A. Sorkine-Hornung, and H. Lensch, "Efficient large-scale approximate nearest neighbor search on the gpu," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2027-2035.
[18] S. Arya and D. M. Mount, "Approximate Nearest Neighbor Queries in Fixed Dimensions," in SODA, 1993, vol. 93, pp. 271-280.
[19] J. Wang and S. Li, "Query-driven iterated neighborhood graph search for large scale indexing," in Proceedings of the 20th ACM international conference on Multimedia, 2012, pp. 179-188: ACM.
[20] Z. Jiang, L. Xie, X. Deng, W. Xu, and J. Wang, "Fast Nearest Neighbor Search in the Hamming Space," in MultiMedia Modeling, 2016, pp. 325-336: Springer.
[21] E. Chávez and E. S. Tellez, "Navigating k-nearest neighbor graphs to solve nearest neighbor searches," in Advances in Pattern Recognition: Springer, 2010, pp. 270-280.
[22] K. Aoyama, K. Saito, H. Sawada, and N. Ueda, "Fast approximate similarity search based on degree-reduced neighborhood graphs," in Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 2011, pp. 1055-1063: ACM.
[23] G. Ruiz, E. Chávez, M. Graff, and E. S. Téllez, "Finding Near Neighbors Through Local Search," in Similarity Search and Applications: Springer, 2015, pp. 103-109. [24] R. Paredes, "Graphs for metric space searching," PhD thesis, Dept. of Computer Science, University of Chile, Chile, Tech. Report TR/DCC-2008-10, http://www.dcc.uchile.cl/~raparede/publ/08PhDthesis.pdf, 2008. [25] Y. Malkov, A. Ponomarenko, A. Logvinov, and V. Krylov, "Scalable distributed algorithm for approximate nearest neighbor search problem in high dimensional general metric spaces," in Similarity Search and Applications: Springer Berlin Heidelberg, 2012, pp. 132-147.
1603.09320#63
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
64
[26] Y. Malkov, A. Ponomarenko, A. Logvinov, and V. Krylov, "Approximate nearest neighbor algorithm based on navigable small world graphs," Information Systems, vol. 45, pp. 61-68, 2014. [27] W. Pugh, "Skip lists: a probabilistic alternative to balanced trees," Communications of the ACM, vol. 33, no. 6, pp. 668-676, 1990. [28] C. C. Cartozo and P. De Los Rios, "Extended navigability of small world networks: exact results and new insights," Physical Review Letters, vol. 102, no. 23, p. 238703, 2009. [29] W. Dong, C. Moses, and K. Li, "Efficient k-nearest neighbor graph construction for generic similarity measures," in Proceedings of the 20th international conference on World Wide Web, 2011, pp. 577-586: ACM. [30] A. Ponomarenko, Y. Malkov, A. Logvinov, and V. Krylov, "Approximate Nearest Neighbor Search Small World Approach," in International Conference on Information and Communication Technologies & Applications, Orlando, Florida, USA, 2011.
1603.09320#64
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
65
[31] J. M. Kleinberg, "Navigation in a small world," Nature, vol. 406, no. 6798, pp. 845-845, 2000. [32] M. Boguna, D. Krioukov, and K. C. Claffy, "Navigability of complex networks," Nature Physics, vol. 5, no. 1, pp. 74-80, 2009. [33] A. Ponomarenko, N. Avrelin, B. Naidan, and L. Boytsov, "Comparative Analysis of Data Structures for Approximate Nearest Neighbor Search," in Proceedings of The Third International Conference on Data Analytics, 2014. [34] B. Naidan, L. Boytsov, and E. Nyberg, "Permutation search methods are efficient, yet faster search is possible," VLDB Proceedings, vol. 8, no. 12, pp. 1618-1629, 2015. [35] D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguná, "Hyperbolic geometry of complex networks," Physical Review E, vol. 82, no. 3, p. 036106, 2010.
1603.09320#65
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
66
[36] A. Gulyás, J. J. Bíró, A. Kőrösi, G. Rétvári, and D. Krioukov, "Navigable networks as Nash equilibria of navigation games," Nature Communications, vol. 6, p. 7651, 2015. [37] Y. Lifshits and S. Zhang, "Combinatorial algorithms for nearest neighbors, near-duplicates and small-world design," in Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2009, pp. 318-326: Society for Industrial and Applied Mathematics. [38] A. Karbasi, S. Ioannidis, and L. Massoulie, "From Small-World Networks to Comparison-Based Search," Information Theory, IEEE Transactions on, vol. 61, no. 6, pp. 3056-3074, 2015. [39] O. Beaumont, A.-M. Kermarrec, and É. Rivière, "Peer to peer multidimensional overlays: Approximating complex structures," in Principles of Distributed Systems: Springer, 2007, pp. 315-328.
1603.09320#66
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
67
[40] O. Beaumont, A.-M. Kermarrec, L. Marchal, and É. Rivière, "VoroNet: A scalable object network based on Voronoi tessellations," in Parallel and Distributed Processing Symposium, 2007. IPDPS 2007. IEEE International, 2007, pp. 1-10: IEEE. [41] J. Kleinberg, "The small-world phenomenon: An algorithmic perspective," in Proceedings of the thirty-second annual ACM symposium on Theory of computing, 2000, pp. 163-170: ACM. [42] J. Travers and S. Milgram, "An experimental study of the small world problem," Sociometry, pp. 425-443, 1969. [43] D. J. Watts and S. H. Strogatz, "Collective dynamics of 'small-world' networks," Nature, vol. 393, no. 6684, pp. 440-442, 1998. [44] Y. A. Malkov and A. Ponomarenko, "Growing homophilic networks are natural navigable small worlds," PloS one, p. e0158162, 2016.
1603.09320#67
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
68
[45] M. T. Goodrich, M. J. Nelson, and J. Z. Sun, "The rainbow skip graph: a fault-tolerant constant-degree distributed data structure," in Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete Algorithms, 2006, pp. 384-393: Society for Industrial and Applied Mathematics. [46] G. T. Toussaint, "The relative neighbourhood graph of a finite planar set," Pattern Recognition, vol. 12, no. 4, pp. 261-268, 1980. [47] B. Harwood and T. Drummond, "FANNG: fast approximate nearest neighbour graphs," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5713-5722. [48] R. A. Dwyer, "Higher-dimensional Voronoi diagrams in linear expected time," Discrete & Computational Geometry, vol. 6, no. 3, pp. 343-367, 1991. [49] L. Boytsov and B. Naidan, "Engineering Efficient and Effective Non-metric Space Library," in Similarity Search and Applications: Springer, 2013, pp. 280-293.
1603.09320#68
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
69
[50] L. Boytsov and B. Naidan, "Learning to prune in metric and non-metric spaces," in Advances in Neural Information Processing Systems, 2013, pp. 1574-1582. [51] A. Andoni and I. Razenshteyn, "Optimal Data-Dependent Hashing for Approximate Near Neighbors," presented at the Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, Portland, Oregon, USA, 2015. [52] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), vol. 12, pp. 1532-1543, 2014. [53] P. Bolettieri et al., "CoPhIR: a test collection for content-based image retrieval," arXiv preprint arXiv:0905.4627, 2009. [54] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
1603.09320#69
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
70
[55] E. Chávez, M. Graff, G. Navarro, and E. Téllez, "Near neighbor searching with K nearest references," Information Systems, vol. 51, pp. 43-61, 2015. [56] E. C. Gonzalez, K. Figueroa, and G. Navarro, "Effective proximity retrieval by ordering permutations," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 30, no. 9, pp. 1647-1658, 2008. [57] E. S. Tellez, E. Chávez, and G. Navarro, "Succinct nearest neighbor search," Information Systems, vol. 38, no. 7, pp. 1019-1030, 2013. [58] P. Sojka, "Software framework for topic modelling with large corpora," in Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 2010: Citeseer. [59] C. Beecks, "Distance-based similarity models for content-based multimedia retrieval," Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2013.
1603.09320#70
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.09320
71
Yury A. Malkov received a Master's degree in physics from Nizhny Novgorod State University in 2009, and a PhD degree in laser physics from the Institute of Applied Physics RAS in 2015. He is the author of 20+ papers on physics and computer science. Yury currently holds the position of Project Leader at the Samsung AI Center in Moscow. His current research interests include deep learning, scalable similarity search, and biological and artificial neural networks. Dmitry A. Yashunin received a Master's degree in physics from Nizhny Novgorod State University in 2009, and a PhD degree in laser physics from the Institute of Applied Physics RAS in 2015. From 2008 to 2012 he worked at Mera Networks. He is the author of 10+ papers on physics. Dmitry currently works at Intelli-Vision as a leading research engineer. His current research interests include scalable similarity search, computer vision and deep learning.
1603.09320#71
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs
We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation.
http://arxiv.org/pdf/1603.09320
Yu. A. Malkov, D. A. Yashunin
cs.DS, cs.CV, cs.IR, cs.SI
13 pages, 15 figures
null
cs.DS
20160330
20180814
[]
1603.08983
0
# Adaptive Computation Time for Recurrent Neural Networks Alex Graves Google DeepMind [email protected] # Abstract This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data. # Introduction
1603.08983#0
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
1
# Introduction The amount of time required to pose a problem and the amount of thought required to solve it are notoriously unrelated. Pierre de Fermat was able to write in a margin the conjecture (if not the proof) of a theorem that took three and a half centuries and reams of mathematics to solve [35]. More mundanely, we expect the effort required to find a satisfactory route between two cities, or the number of queries needed to check a particular fact, to vary greatly, and unpredictably, from case to case. Most machine learning algorithms, however, are unable to dynamically adapt the amount of computation they employ to the complexity of the task they perform.
1603.08983#1
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
2
For artificial neural networks, where the neurons are typically arranged in densely connected layers, an obvious measure of computation time is the number of layer-to-layer transformations the network performs. In feedforward networks this is controlled by the network depth, or number of layers stacked on top of each other. For recurrent networks, the number of transformations also depends on the length of the input sequence — which can be padded or otherwise extended to allow for extra computation. The evidence that increased depth leads to more performant networks is by now inarguable [5, 4, 19, 9], and recent results show that increased sequence length can be similarly beneficial [31, 33, 25]. However it remains necessary for the experimenter to decide a priori on the amount of computation allocated to a particular input vector or sequence. One solution is to simply make every network very deep and design its architecture in such a way as to mitigate the vanishing gradient problem [13] associated with long chains of iteration [29, 17]. However in the interests of both computational efficiency and ease of learning it seems preferable to dynamically vary the number of steps for which the network ‘ponders’ each input before emitting an output. In this case the effective depth of the network at each step along the sequence becomes a dynamic function of the inputs received so far.
1603.08983#2
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
3
The approach pursued here is to augment the network output with a sigmoidal halting unit whose activation determines the probability that computation should continue. The resulting halting distribution is used to define a mean-field vector for both the network output and the internal network state propagated along the sequence. A stochastic alternative would be to halt or continue according to binary samples drawn from the halting distribution—a technique that has recently been applied to scene understanding with recurrent networks [7]. However the mean-field approach has the advantage of using a smooth function of the outputs and states, with no need for stochastic gradient estimates. We expect this to be particularly beneficial when long sequences of halting decisions must be made, since each decision is likely to affect all subsequent ones, and sampling noise will rapidly accumulate (as observed for policy gradient methods [36]).
1603.08983#3
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
4
A related architecture known as Self-Delimiting Neural Networks [26, 30] employs a halting neuron to end a particular update within a large, partially activated network; in this case however a simple activation threshold is used to make the decision, and no gradient with respect to halting time is propagated. More broadly, learning when to halt can be seen as a form of conditional computing, where parts of the network are selectively enabled and disabled according to a learned policy [3, 6]. We would like the network to be parsimonious in its use of computation, ideally limiting itself to the minimum number of steps necessary to solve the problem. Finding this limit in its most general form would be equivalent to determining the Kolmogorov complexity of the data (and hence solving the halting problem) [21]. We therefore take the more pragmatic approach of adding a time cost to the loss function to encourage faster solutions. The network then has to learn to trade off accuracy against speed, just as a person must when making decisions under time pressure. One weakness is that the numerical weight assigned to the time cost has to be hand-chosen, and the behaviour of the network is quite sensitive to its value. The rest of the paper is structured as follows: the Adaptive Computation Time algorithm is presented in Section 2, experimental results on four synthetic problems and one real-world dataset are reported in Section 3, and concluding remarks are given in Section 4.
1603.08983#4
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
5
# 2 Adaptive Computation Time Consider a recurrent neural network R composed of a matrix of input weights W_x, a parametric state transition model S, a set of output weights W_y and an output bias b_y. When applied to an input sequence x = (x_1, . . . , x_T), R computes the state sequence s = (s_1, . . . , s_T) and the output sequence y = (y_1, . . . , y_T) by iterating the following equations from t = 1 to T: s_t = S(s_{t−1}, W_x x_t) (1) and y_t = W_y s_t + b_y (2). The state is a fixed-size vector of real numbers containing the complete dynamic information of the network. For a standard recurrent network this is simply the vector of hidden unit activations. For a Long Short-Term Memory network (LSTM) [14], the state also contains the activations of the memory cells. For a memory augmented network such as a Neural Turing Machine (NTM) [10], the state contains both the complete state of the controller network and the complete state of the memory. In general some portions of the state (for example the NTM memory contents) will not be visible to the output units; in this case we consider the corresponding columns of W_y to be fixed to 0.
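To make the notation concrete, here is a minimal sketch of the recurrence in Equations (1)-(2). The tanh transition and the recurrent weight matrix `Ws` are illustrative assumptions: the paper leaves S as an arbitrary parametric state transition model (for example an LSTM or an NTM controller).

```python
import numpy as np

def rnn_step(s_prev, x_t, Wx, Ws, Wy, by):
    """One update of the plain recurrent network R.
    s_t = S(s_{t-1}, Wx x_t)  -- Eq. (1), with S taken to be a tanh layer here
    y_t = Wy s_t + by         -- Eq. (2)
    """
    s_t = np.tanh(Ws @ s_prev + Wx @ x_t)
    y_t = Wy @ s_t + by
    return s_t, y_t

# Example with 4 hidden units and 3-dimensional inputs/outputs:
rng = np.random.default_rng(0)
Wx, Ws = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
Wy, by = rng.normal(size=(3, 4)), np.zeros(3)
s, y = rnn_step(np.zeros(4), rng.normal(size=3), Wx, Ws, Wy, by)
```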
1603.08983#5
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
6
Adaptive Computation Time (ACT) modifies the conventional setup by allowing R to perform a variable number of state transitions and compute a variable number of outputs at each input step. Let N(t) be the total number of updates performed at step t. Then define the intermediate state sequence (s_t^1, . . . , s_t^{N(t)}) and intermediate output sequence (y_t^1, . . . , y_t^{N(t)}) at step t as follows: s_t^n = S(s_{t−1}, x_t^1) if n = 1, and s_t^n = S(s_t^{n−1}, x_t^n) otherwise (3); y_t^n = W_y s_t^n + b_y (4), where x_t^n = x_t + δ_{n,1} is the input at time t augmented with a binary flag that indicates whether the input step has just been incremented, allowing the network to distinguish between repeated inputs and repeated computations for the same input. Note that the same state function is used for all state transitions (intermediate or otherwise), and similarly the output weights and bias are shared for all outputs. It would also be possible to use different state and output parameters for each intermediate step; however doing so would cloud the distinction between increasing the number of parameters and increasing the number of computational steps. We leave this for future work. To determine how many updates R performs at each input step an extra sigmoidal halting unit h is added to the network output, with associated weight matrix W_h and bias b_h:
1603.08983#6
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
7
To determine how many updates R performs at each input step an extra sigmoidal halting unit h is added to the network output, with associated weight matrix W_h and bias b_h: h_t^n = σ(W_h s_t^n + b_h) (5). As with the output weights, some columns of W_h may be fixed to zero to give selective access to the network state. The activation of the halting unit is then used to determine the halting probability p_t^n of the intermediate steps: p_t^n = R(t) if n = N(t), and p_t^n = h_t^n otherwise (6), where N(t) = min{n' : Σ_{n=1}^{n'} h_t^n ≥ 1 − ε} (7), the remainder R(t) is defined as R(t) = 1 − Σ_{n=1}^{N(t)−1} h_t^n (8), and ε is a small constant (0.01 for the experiments in this paper), whose purpose is to allow computation to halt after a single update if h_t^1 ≥ 1 − ε, as otherwise a minimum of two updates would be required for every input step. It follows directly from the definition that Σ_{n=1}^{N(t)} p_t^n = 1 and 0 ≤ p_t^n ≤ 1 for all n, so this is a valid probability distribution. A similar distribution was recently used to define differentiable push and pop operations for neural stacks and queues [i].
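A minimal sketch of the halting rule in Equations (5)-(8) follows. It is illustrative only: `next_halt_prob` stands in for evaluating the sigmoidal halting unit of Equation (5) on each intermediate state, and the `max_steps` cap anticipates the hard limit M discussed later in the paper.

```python
def halting_distribution(next_halt_prob, eps=0.01, max_steps=100):
    """Build the halting probabilities p_t^1..p_t^{N(t)} of Eq. (6).

    next_halt_prob(n) should return h_t^n, the sigmoidal halting activation
    for intermediate step n (Eq. 5). Steps are taken until the accumulated
    halting mass reaches 1 - eps (Eq. 7); the final step receives the
    remainder R(t) (Eq. 8), so the probabilities always sum to 1.
    """
    probs, total = [], 0.0
    for n in range(1, max_steps + 1):
        h = next_halt_prob(n)
        if n == max_steps or total + h >= 1.0 - eps:
            probs.append(1.0 - total)    # p_t^{N(t)} = R(t)
            break
        probs.append(h)                   # p_t^n = h_t^n for n < N(t)
        total += h
    return probs, len(probs)              # len(probs) == N(t)

# Example: a constant halting activation of 0.3 gives N(t) = 4 and
# probabilities approximately [0.3, 0.3, 0.3, 0.1].
probs, N_t = halting_distribution(lambda n: 0.3)
```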
1603.08983#7
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
10
Figure 2: RNN Computation Graph with Adaptive Computation Time. The graph is equivalent to Figure 1, only with each state and output computation expanded to a variable number of intermediate updates. Arrows touching boxes denote operations applied to all units in the box, while arrows leaving boxes denote summations over all units in the box. properties the vectors embody. There are several reasons to believe that such an assumption is reasonable. Firstly, it has been observed that the high-dimensional representations present in neural networks naturally tend to behave in a linear way [32, 20], even remaining consistent under arithmetic operations such as addition and subtraction [22]. Secondly, neural networks have been successfully trained under a wide range of adversarial regularisation constraints, including sparse internal states [23], stochastically masked units [28] and randomly perturbed weights [1]. This leads us to believe that the relatively benign constraint of approximately linear representations will not be too damaging. Thirdly, as training converges, the tendency for both mean-field and stochastic latent variables is to concentrate all the probability mass on a single value. In this case that yields a standard RNN with each input duplicated a variable, but deterministic, number of times, rendering the linearity assumption irrelevant. A diagram of the unrolled computation graph of a standard RNN is illustrated in Figure 1, while Figure 2 provides the equivalent diagram for an RNN trained with ACT.
1603.08983#10
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
11
A diagram of the unrolled computation graph of a standard RNN is illustrated in Figure 1, while Figure 2 provides the equivalent diagram for an RNN trained with ACT. # 2.1 Limiting Computation Time If no constraints are placed on the number of updates R can take at each step it will naturally tend to ‘ponder’ each input for as long as possible (so as to avoid making predictions and incurring errors). We therefore require a way of limiting the amount of computation the network performs. Given a length T input sequence x, define the ponder sequence (ρ_1, . . . , ρ_T) of R as ρ_t = N(t) + R(t) (10) and the ponder cost P(x) as P(x) = Σ_{t=1}^{T} ρ_t (11). Since R(t) ∈ (0, 1), P(x) is an upper bound on the (non-differentiable) property we ultimately want to reduce, namely the total computation Σ_t N(t) during the sequence. We can encourage the network to minimise P(x) by modifying the sequence loss function L(x, y) used for training: L̂(x, y) = L(x, y) + τP(x) (12)
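A minimal sketch of the ponder cost of Equations (10)-(12), under the assumption that N(t) and R(t) have already been computed for each input step (for example with the halting sketch above); the function names are illustrative.

```python
def ponder_cost(step_counts, remainders):
    """P(x) from Eqs. (10)-(11): rho_t = N(t) + R(t), summed over all t."""
    return sum(N_t + R_t for N_t, R_t in zip(step_counts, remainders))

def penalised_loss(task_loss, step_counts, remainders, tau):
    """Eq. (12): the training loss plus the time penalty tau * P(x)."""
    return task_loss + tau * ponder_cost(step_counts, remainders)

# Example: three input steps taking 2, 4 and 1 updates respectively.
loss = penalised_loss(task_loss=0.8,
                      step_counts=[2, 4, 1],
                      remainders=[0.35, 0.10, 0.95],
                      tau=0.001)
```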
1603.08983#11
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
12
L̂(x, y) = L(x, y) + τP(x) (12), where τ is a time penalty parameter that weights the relative cost of computation versus error. As we will see in the experiments section the behaviour of the network is quite sensitive to the value of τ, and it is not obvious how to choose a good value. If computation time and prediction error can be meaningfully equated (for example if the relative financial cost of both were known) a more principled technique for selecting τ should be possible. To prevent very long sequences at the beginning of training (while the network is learning how to use the halting unit) the bias term b_h can be initialised to a positive value. In addition, a hard limit M on the maximum allowed value of N(t) can be imposed to avoid excessive space and time costs. In this case Equation (7) is modified to N(t) = min{M, min{n' : Σ_{n=1}^{n'} h_t^n ≥ 1 − ε}} (13) # 2.2 Error Gradients
1603.08983#12
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
13
N(t) = min{M, min{n' : Σ_{n=1}^{n'} h_t^n ≥ 1 − ε}} (13) # 2.2 Error Gradients The ponder costs ρ_t are discontinuous with respect to the halting probabilities at the points where N(t) increments or decrements (that is, when the summed probability mass up to some n either decreases below or increases above 1 − ε). However they are continuous away from those points, as N(t) remains constant and R(t) is a linear function of the probabilities. In practice we simply ignore the discontinuities by treating N(t) as constant and minimising R(t) everywhere. Given this approximation, the gradient of the ponder cost with respect to the halting activations is straightforward: ∂P(x)/∂h_t^n = 0 if n = N(t), −1 otherwise (14)
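Under this approximation the gradient of one step's ponder cost is a fixed vector, which the following minimal sketch makes explicit (the function name is an assumption for illustration):

```python
def ponder_grad_wrt_halting(N_t):
    """Eq. (14): with N(t) held constant, only the remainder R(t) varies,
    so dP(x)/dh_t^n = -1 for n < N(t) and 0 for n = N(t)."""
    return [-1.0] * (N_t - 1) + [0.0]

# Example: a step that took 4 updates.
assert ponder_grad_wrt_halting(4) == [-1.0, -1.0, -1.0, 0.0]
```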
1603.08983#13
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
14
∂P(x)/∂h_t^n = 0 if n = N(t), −1 otherwise (14). For a stochastic ACT network, a more natural halting distribution than the one described in Equations (6) to (8) would be to simply treat h_t^n as the probability of halting at step n, in which case p_t^n = h_t^n ∏_{n'=1}^{n−1}(1 − h_t^{n'}). One could then set ρ_t = Σ_n n p_t^n — i.e. the expected ponder time under the stochastic distribution. However experiments show that networks trained to minimise expected rather than total halting time learn to ‘cheat’ in the following ingenious way: they set h_t^1 to a value just below the halting threshold, then keep h_t^n = 0 until some N(t) when they set h_t^{N(t)} high enough to ensure they halt. In this case p_t^{N(t)} ≪ p_t^1, so the states and outputs at n = N(t) have much lower weight in the mean field updates (Equation (9)) than those at n = 1; however by making the magnitudes of the states and output vectors much larger at N(t) than n = 1 the network can still ensure that the update is dominated by the final vectors, despite having paid a low ponder penalty.
1603.08983#14
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
15
and hence

$$\frac{\partial \hat{\mathcal{L}}(x, y)}{\partial h^n_t} = \frac{\partial \mathcal{L}(x, y)}{\partial h^n_t} - \begin{cases} 0 & \text{if } n = N(t) \\ \tau & \text{otherwise} \end{cases} \qquad (15)$$

The halting activations only influence $\mathcal{L}$ via their effect on the halting probabilities, therefore

$$\frac{\partial \mathcal{L}(x, y)}{\partial h^n_t} = \sum_{n'=1}^{N(t)} \frac{\partial \mathcal{L}(x, y)}{\partial p^{n'}_t} \frac{\partial p^{n'}_t}{\partial h^n_t} \qquad (16)$$

Furthermore, since the halting probabilities only influence $\mathcal{L}$ via their effect on the states and outputs, it follows from Equation (9) that

$$\frac{\partial \mathcal{L}(x, y)}{\partial p^n_t} = \frac{\partial \mathcal{L}(x, y)}{\partial y_t} y^n_t + \frac{\partial \mathcal{L}(x, y)}{\partial s_t} s^n_t \qquad (17)$$

while, from Equations (6) and (8),

$$\frac{\partial p^{n'}_t}{\partial h^n_t} = \begin{cases} \delta_{n n'} & \text{if } n' < N(t) \text{ and } n < N(t) \\ -1 & \text{if } n' = N(t) \text{ and } n < N(t) \\ 0 & \text{if } n = N(t) \end{cases} \qquad (18)$$

Combining Equations (15), (17) and (18) gives, for $n < N(t)$
1603.08983#15
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
16
Combining Equations (15), (17) and (18) gives, for $n < N(t)$,

$$\frac{\partial \hat{\mathcal{L}}(x, y)}{\partial h^n_t} = \frac{\partial \mathcal{L}(x, y)}{\partial y_t}\left(y^n_t - y^{N(t)}_t\right) + \frac{\partial \mathcal{L}(x, y)}{\partial s_t}\left(s^n_t - s^{N(t)}_t\right) - \tau \qquad (19)$$

while for $n = N(t)$

$$\frac{\partial \hat{\mathcal{L}}(x, y)}{\partial h^{N(t)}_t} = 0 \qquad (20)$$

Thereafter the network can be differentiated as usual (e.g. with backpropagation through time [36]) and trained with gradient descent.

# 3 Experiments

We tested recurrent neural networks (RNNs) with and without ACT on four synthetic tasks and one real-world language processing task. LSTM was used as the network architecture for all experiments except one, where a simple RNN was used. However we stress that ACT is equally applicable to any recurrent architecture.
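As a concrete illustration of the gradient derived in Equations (19) and (20) above, the sketch below computes the gradient of the time-penalised loss with respect to the halting activations, given the intermediate states and outputs and the gradients of the unpenalised loss; the function name and argument layout are assumptions for illustration, not the paper's code.

```python
import numpy as np

def halting_activation_grads(dL_dy, dL_ds, ys, ss, tau):
    """Analytic gradient of the penalised loss w.r.t. h^n_t (Equations (19)-(20)).

    dL_dy, dL_ds: gradients of the task loss w.r.t. the mean-field output/state
    ys, ss: intermediate outputs y^n_t and states s^n_t for n = 1..N(t)
    """
    N = len(ys)
    grads = np.zeros(N)
    for n in range(N - 1):                              # n < N(t)
        grads[n] = (dL_dy @ (ys[n] - ys[N - 1])
                    + dL_ds @ (ss[n] - ss[N - 1])
                    - tau)
    grads[N - 1] = 0.0                                  # n = N(t): Equation (20)
    return grads
```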
1603.08983#16
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
17
All the tasks were supervised learning problems with discrete targets and cross-entropy loss. The data for the synthetic tasks was generated online and cross-validation was therefore not needed. Similarly, the character prediction dataset was sufficiently large that the network did not overfit. The performance metric for the synthetic tasks was the sequence error rate: the fraction of examples where any mistakes were made in the complete output sequence. This metric is useful as it is trivial to evaluate without decoding. For character prediction the metric was the average log-loss of the output predictions, in units of bits per character.

Most of the training parameters were fixed for all experiments: Adam was used for optimisation with a learning rate of $10^{-4}$; the Hogwild! algorithm was used for asynchronous training with 16 threads; the initial halting unit bias $b$ mentioned in Equation (5) was 1; the ε term in the halting condition was 0.01. The synthetic tasks were all trained for 1M iterations, where an iteration

Figure 3: Parity training Example. Each sequence consists of a single input and target vector. Only 8 of the 64 input bits are shown for clarity.
1603.08983#17
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
18
Figure 3: Parity training Example. Each sequence consists of a single input and target vector. Only 8 of the 64 input bits are shown for clarity.

is defined as a weight update on a single thread (hence the total number of weight updates is approximately 16 times the number of iterations). The character prediction task was trained for 10K iterations. Early stopping was not used for any of the experiments.

A logarithmic grid search over time penalties was performed for each experiment, with 20 randomly initialised networks trained for each value of τ. For the synthetic problems the grid ranged over $\tau = i \times 10^{-j}$ with integer $i$ in the range 1–10 and the exponent $j$ in the range 1–4. For the language modelling task, which took many days to complete, the range of $j$ was limited to 1–3 to reduce training time (lower values of τ, which naturally induce more pondering, tend to give greater data efficiency but slower wall clock training time). Unless otherwise stated the maximum computation time $M$ (Equation (13)) was set to 100. In all experiments the networks converged on learned values of $N(t)$ that were far less than $M$, which functions mainly as a safeguard against excessively long ponder times early in training.

# 3.1 Parity
1603.08983#18
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
19
# 3.1 Parity Determining the parity of a sequence of binary numbers is a trivial task for a recurrent neural network [27], which simply needs to implement an internal switch that changes sign every time a one is received. For shallow feedforward networks receiving the entire sequence in one vector, however, the number of distinct input patterns, and hence difficulty of the task, grows exponentially with the number of bits. We gauged the ability of ACT to infer an inherently sequential algorithm from statically presented data by presenting large binary vectors to the network and asking it to determine the parity. By varying the number of binary bits for which parity must be calculated we were also able to assess ACT’s ability to adapt the amount of computation to the difficulty of the vector.
1603.08983#19
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
20
The input vectors had 64 elements, of which a random number from 1 to 64 were randomly set to 1 or −1 and the rest were set to 0. The corresponding target was 1 if there was an odd number of ones and 0 if there was an even number of ones. Each training sequence consisted of a single input and target vector, an example of which is shown in Figure 3. The network architecture was a simple RNN with a single hidden layer containing 128 tanh units and a single sigmoidal output unit, trained with binary cross-entropy loss on minibatches of size 128. Note that without ACT the recurrent connection in the hidden layer was never used since the data had no sequential component, and the network reduced to a feedforward network with a single hidden layer. Figure 4 demonstrates that the network was unable to reliably solve the problem without ACT, with a mean of almost 40% error compared to 50% for random guessing. For penalties of 0.03 and below the mean error was below 5%. Figure 5 reveals that the solutions were both more rapid and more accurate with lower time penalties. It also highlights the relationship between the time penalty, the classification error rate and the average ponder time per input. The variance in ponder time for low τ networks is very high, indicating that many correct solutions with widely varying runtime can be discovered. We speculate that progressively higher τ values lead the network to compute
1603.08983#20
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
21
Figure 4: Parity Error Rates. Bar heights show the mean error rates for different time penalties at the end of training. The error bars show the standard error in the mean.
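For concreteness, a possible way to generate the parity training examples described above is sketched below; which positions receive the non-zero bits is not specified in the text, so the random placement here is an assumption.

```python
import numpy as np

def parity_example(size=64, rng=None):
    """One parity example: a size-64 vector with a random number (1-64) of
    entries set to +/-1 at random positions, the rest zero; the target is 1
    if the number of +1 entries is odd and 0 otherwise."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(size)
    k = int(rng.integers(1, size + 1))                 # how many non-zero bits
    idx = rng.choice(size, size=k, replace=False)      # assumed random placement
    x[idx] = rng.choice([-1.0, 1.0], size=k)
    target = float((x == 1.0).sum() % 2)               # parity of the ones
    return x, target
```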
1603.08983#21
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
22
Figure 4: Parity Error Rates. Bar heights show the mean error rates for different time penalties at the end of training. The error bars show the standard error in the mean.
1603.08983#22
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
23
Figure 5: Parity Learning Curves and Error Rates Versus Ponder Time. Left: faint coloured curves show the errors for individual runs. Bold lines show the mean errors over all 20 runs for each τ value. ‘Iterations’ is the number of gradient updates per asynchronous worker. Right: Small circles represent individual runs after training is complete, large circles represent the mean over 20 runs for each τ value. ‘Ponder’ is the mean number of computation steps per input timestep (minimum 1). The black dotted line shows the mean error for the networks without ACT. The height of the ellipses surrounding the mean values represents the standard error over error rates for that value of τ , while the width shows the standard error over ponder times. the parities of successively larger chunks of the input vector at each ponder step, then iteratively combine these calculations to obtain the parity of the complete vector.
1603.08983#23
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
24
the parities of successively larger chunks of the input vector at each ponder step, then iteratively combine these calculations to obtain the parity of the complete vector. Figure 6 shows that for the networks without ACT and those with overly high time penalties, the error rate increases sharply with the difficulty of the task (where difficulty is defined as the number of bits whose parity must be determined), while the amount of ponder remains roughly constant. For the more successful networks, with intermediate τ values, ponder time appears to grow linearly with difficulty, with a slope that generally increases as τ decreases. Even for the best networks the error rate increased somewhat with difficulty. For some of the lowest τ networks there is a dramatic increase in ponder after about 32 bits, suggesting an inefficient algorithm. # 3.2 Logic Like parity, the logic task tests if an RNN with ACT can sequentially process a static vector. Unlike parity it also requires the network to internally transfer information across successive input timesteps, thereby testing whether ACT can propagate coherent internal states.
1603.08983#24
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
25
Each input sequence consists of a random number from 1 to 10 of size 102 input vectors. The first two elements of each input represent a pair of binary numbers; the remainder of the vector is divided up into 10 chunks of size 10. The first B chunks, where B is a random number from

Figure 6: Parity Ponder Time and Error Rate Versus Input Difficulty. Faint lines are individual runs, bold lines are means over 20 networks. ‘Difficulty’ is the number of bits in the parity vectors, with a mean over 1,000 random vectors used for each data-point.

Table 1: Binary Truth Tables for the Logic Task

| P | Q | NOR | Xq | ABJ | XOR | NAND | AND | XNOR | if/then | then/if | OR |
|---|---|-----|----|-----|-----|------|-----|------|---------|---------|----|
| T | T | F | F | F | F | F | T | T | T | T | T |
| T | F | F | F | T | T | T | F | F | F | T | T |
| F | T | F | T | F | T | T | F | F | T | F | T |
| F | F | T | F | F | F | T | F | T | T | T | F |
1603.08983#25
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
26
1 to 10, contain one-hot representations of randomly chosen numbers between 1 and 10; each of these numbers corresponds to an index into the subset of binary logic gates whose truth tables are listed in Table 1. The remaining 10 − B chunks were zeroed to indicate that no further binary operations were defined for that vector. The binary target $b_{B+1}$ for each input is the truth value yielded by recursively applying the B binary gates in the vector to the two initial bits $b_1, b_0$. That is, for $1 \le i \le B$:

$$b_{i+1} = T_i(b_i, b_{i-1}) \qquad (21)$$

where $T_i(\cdot, \cdot)$ is the truth table indexed by chunk $i$ in the input vector.
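A hedged sketch of how one such size-102 input vector could be assembled from the description above; the exact element ordering and the 0-based gate indexing are assumptions for illustration.

```python
import numpy as np

def logic_input_vector(b0, b1, gate_indices):
    """One logic-task input: two binary inputs followed by 10 chunks of size 10,
    the first B chunks holding one-hot gate indices and the rest zeroed."""
    assert len(gate_indices) <= 10
    x = np.zeros(102)
    x[0], x[1] = float(b0), float(b1)
    for i, g in enumerate(gate_indices):     # g in 0..9 indexes a gate from Table 1
        x[2 + 10 * i + g] = 1.0
    return x
```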
1603.08983#26
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
27
$$b_{i+1} = T_i(b_i, b_{i-1}) \qquad (21)$$

where $T_i(\cdot, \cdot)$ is the truth table indexed by chunk $i$ in the input vector. For the first vector in the sequence, the two input bits $b_0, b_1$ were randomly chosen to be false (0) or true (1) and assigned to the first two elements in the vector. For subsequent vectors, only $b_1$ was random, while $b_0$ was implicitly equal to the target bit from the previous vector (for the purposes of calculating the current target bit), but was always set to zero in the input vector. To solve the task, the network therefore had to learn both how to calculate the sequence of binary operations represented by the chunks in each vector, and how to carry the final output of that sequence over to the next timestep. An example input-target sequence pair is shown in Figure 7. The network architecture was single-layer LSTM with 128 cells. The output was a single sigmoidal unit, trained with binary cross-entropy, and the minibatch size was 16.
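The recursive target computation of Equation (21) can be written directly from the truth tables in Table 1; the sketch below is illustrative (gate indices are 0-based, and the argument order follows Equation (21), with the first argument playing the role of P).

```python
# Gates from Table 1, applied as T_i(b_i, b_{i-1}) with b_i as P and b_{i-1} as Q.
GATES = [
    lambda p, q: not (p or q),     # NOR
    lambda p, q: (not p) and q,    # Xq
    lambda p, q: p and (not q),    # ABJ
    lambda p, q: p != q,           # XOR
    lambda p, q: not (p and q),    # NAND
    lambda p, q: p and q,          # AND
    lambda p, q: p == q,           # XNOR
    lambda p, q: (not p) or q,     # if/then
    lambda p, q: p or (not q),     # then/if
    lambda p, q: p or q,           # OR
]

def logic_target(b0, b1, gate_indices):
    """Recursively apply the selected gates (Equation (21)) to obtain b_{B+1}."""
    prev, cur = bool(b0), bool(b1)
    for g in gate_indices:
        prev, cur = cur, GATES[g](cur, prev)
    return int(cur)
```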
1603.08983#27
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
28
Figure 8 shows that the network reaches a minimum sequence error rate of around 0.2 without ACT (compared to 0.5 for random guessing), and virtually zero error for all τ ≤ 0.01. From Figure 9 it can be seen that low τ ACT networks solve the task very quickly, requiring about 10,000 training iterations. For higher τ values ponder time reduces to 1, at which point the networks trained with ACT behave identically to those without. For lower τ values, the spread of ponder values, and hence computational cost, is quite large. Again we speculate that this is due to the network learning more or less ‘chunked’ solutions in which composite truth tables are learned for multiple successive logic operations. This is somewhat supported by the clustering of the lowest τ networks around a ponder time of 5–6, which is approximately the mean number of logic gates applied per sequence,
1603.08983#28
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
29
Figure 7: Logic training Example. Both the input and target sequences consist of 3 vectors. For simplicity only 2 of the 10 possible logic gates represented in the input are shown, and each is restricted to one of the first 3 gates in Table 1 (NOR, Xq, and ABJ). The segmentation of the input vectors is shown on the left and the recursive application of Equation (21) required to determine the targets (and subsequent b0 values) is shown in italics above the target vectors.

Figure 8: Logic Error Rates.

and hence the minimum number of computations the network would need if calculating single binary operations at a time.
1603.08983#29
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
30
Figure 8: Logic Error Rates. and hence the minimum number of computations the network would need if calculating single binary operations at a time. Figure 10 shows a surprisingly high ponder time for the least difficult inputs, with some networks taking more than 10 steps to evaluate a single logic gate. From 5 to 10 logic gates, ponder gradually increases with difficulty as expected, suggesting that a qualitatively different solution is learned for the two regimes. This is supported by the error rates for the non ACT and high τ networks, which increase abruptly after 5 gates. It may be that 5 is the upper limit on the number of successive gates the network can learn as a single composite operation, and thereafter it is forced to apply an iterative algorithm. # 3.3 Addition
1603.08983#30
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
31
# 3.3 Addition

The addition task presents the network with an input sequence of 1 to 5 size 50 input vectors. Each vector represents a D digit number, where D is drawn randomly from 1 to 5, and each digit is drawn randomly from 0 to 9. The first 10D elements of the vector are a concatenation of one-hot encodings of the D digits in the number, and the remainder of the vector is set to 0. The required output is the cumulative sum of all inputs up to the current one, represented as a set of 6 simultaneous classifications for the 6 possible digits in the sum. There is no target for the first vector in the sequence, as no sums have yet been calculated. Because the previous sum must be carried over by the network, this task again requires the internal state of the network to remain coherent. Each classification is modelled by a size 11 softmax, where the first 10 classes are the digits and the 11th is a special marker used to indicate that the number is complete. An example input-target pair is shown in Figure 11. The network was single-layer LSTM with 512 memory cells. The loss function was the joint cross-entropy of all 6 targets at each time-step where targets were present and the minibatch size
1603.08983#31
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
32
Figure 9: Logic Learning Curves and Error Rates Versus Ponder Time.

Figure 10: Logic Ponder Time and Error Rate Versus Input Difficulty. ‘Difficulty’ is the number of logic gates in each input vector; all sequences were length 5.
1603.08983#32
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
33
Figure 11: Addition training Example. Each digit in the input sequence is represented by a size 10 one hot encoding. Unused input digits, marked ‘-’, are represented by a vector of 10 zeros. The black vector at the start of the target sequence indicates that no target was required for that step. The target digits are represented as 1-of-11 classes, where the 11th class, marked ‘*’, is used for digits beyond the end of the target number.

Figure 12: Addition Error Rates.
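To make the addition encoding just described concrete, one possible data generator is sketched below; the digit ordering within the one-hot input and within the 6-way target (here most-significant first) is an assumption, since the text does not specify it.

```python
import numpy as np

def addition_sequence(rng=None):
    """One addition example: 1-5 input vectors of size 50 (one-hot digits of a
    random 1-5 digit number each), with running-sum targets coded as six 11-way
    classes (digits 0-9 plus a 'number complete' marker, here index 10)."""
    rng = np.random.default_rng() if rng is None else rng
    seq_len = int(rng.integers(1, 6))
    inputs, targets, total = [], [], 0
    for t in range(seq_len):
        digits = rng.integers(0, 10, size=int(rng.integers(1, 6)))
        x = np.zeros(50)
        for i, d in enumerate(digits):
            x[10 * i + int(d)] = 1.0                 # one-hot encode each digit
        inputs.append(x)
        total += int("".join(str(int(d)) for d in digits))
        if t > 0:                                    # no target for the first input
            sum_digits = [int(c) for c in str(total)]
            y = np.full(6, 10)                       # class 10 marks 'number complete'
            y[:len(sum_digits)] = sum_digits
            targets.append(y)
    return np.stack(inputs), np.stack(targets) if targets else np.zeros((0, 6), dtype=int)
```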
1603.08983#33
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
34
Figure 12: Addition Error Rates.

Figure 13: Addition Learning Curves and Error Rates Versus Ponder Time.

was 32. The maximum ponder M was set to 20 for this task, as it was found that some networks had very high ponder times early in training.
1603.08983#34
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
35
was 32. The maximum ponder M was set to 20 for this task, as it was found that some networks had very high ponder times early in training. The results in Figure 12 show that the task was perfectly solved by the ACT networks for all values of τ in the grid search. Unusually, networks with higher τ solved the problem with fewer training examples. Figure 14 demonstrates that the relationship between the ponder time and the number of digits was approximately linear for most of the ACT networks, and that for the most efficient networks (with the highest τ values) the slope of the line was close to 1, which matches our expectations that an efficient long addition algorithm should need one computation step per digit. Figure 15 shows how the ponder time is distributed during individual addition sequences, providing further evidence of an approximately linear-time long addition algorithm. # 3.4 Sort
1603.08983#35
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
36
# 3.4 Sort

The sort task requires the network to sort sequences of 2 to 15 numbers drawn from a standard normal distribution in ascending order. The experiments considered so far have been designed to favour ACT by compressing sequential information into single vectors, and thereby requiring the use of multiple computation steps to unpack them. For the sort task a more natural sequential representation was used: the random numbers were presented one at a time as inputs, and the required output was the sequence of indices into the number sequence placed in sorted order; an example is shown in Figure 16. We were particularly curious to see how the number of ponder steps scaled with the number of elements to be sorted, knowing that efficient sorting algorithms have O(N log N) computational cost. The network was single-layer LSTM with 512 cells. The output layer was a size 15 softmax,

Figure 14: Addition Ponder Time and Error Rate Versus Input Difficulty. ‘Difficulty’ is the number of digits in each input vector; all sequences were length 3.
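For the sort task described above, one possible data generator is sketched below; the input layout (one real value plus an end-of-sequence flag per step, with zeroed inputs during the output phase) follows the example format of Figure 16, and the remaining details are assumptions.

```python
import numpy as np

def sort_example(rng=None):
    """One sort example: 2-15 standard-normal samples presented one per step,
    with a flag marking the end of the sort sequence; the targets are the
    indices of the inputs in ascending order, emitted after the inputs end."""
    rng = np.random.default_rng() if rng is None else rng
    length = int(rng.integers(2, 16))
    values = rng.standard_normal(length)
    inputs = np.zeros((2 * length, 2))
    inputs[:length, 0] = values
    inputs[length - 1, 1] = 1.0        # end-of-sequence flag on the last real input
    targets = np.argsort(values)       # sorted-order indices, one per output step
    return inputs, targets
```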
1603.08983#36
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
37
Figure 15: Ponder Time During Three Addition Sequences. The input sequence is shown along the bottom x-axis and the network output sequence is shown along the top x-axis. The ponder time ρt at each input step is shown by the black lines; the actual number of computational steps taken at each point is ρt rounded up to the next integer. The grey lines show the total number of digits in the two numbers being summed at each step; this appears to give a rough lower bound on the ponder time, suggesting an internal algorithm that is approximately linear in the number of digits. All plots were created using the same network, trained with τ = 9e−4.
1603.08983#37
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
38
trained with cross-entropy to classify the indices of the sorted inputs. The minibatch size was 16.

Figure 17 shows that the advantage of using ACT is less dramatic for this task than the previous three, but still substantial (from around 12% error without ACT to around 6% for the best τ value). However from Figure 18 it is clear that these gains come at a heavy computational cost, with the best networks requiring roughly 9 times as much computation as those without ACT. Not surprisingly, Figure 19 shows that the error rate grew rapidly with the sequence length for all networks. It also indicates that the better networks had a sublinear growth in computations per input step with sequence length, though whether this indicates a logarithmic time algorithm is unclear. One problem with the sort task was that the Gaussian samples were sometimes very close together, making it hard for the network to determine which was greater; enforcing a minimum separation between successive values would probably be beneficial.

Figure 20 shows the ponder time during three sort sequences of varying length. As can be seen, there is a large spike in ponder time near (though not precisely at) the end of the input sequence, presumably when the majority of the sort comparisons take place. Note that the spike is much higher for the longer two sequences than the length 5 one, again pointing to an algorithm that is nonlinear
1603.08983#38
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
39
Figure 16: Sort Training Example. Each size 2 input vector consists of one real number and one binary flag to indicate the end of the sequence to be sorted; inputs following the sort sequence are set to zero and marked in black. No targets are present until after the sort sequence; thereafter the size 15 target vectors represent the sorted indices of the input sequence.
Figure 17: Sort Error Rates (error rate versus time penalty).
1603.08983#39
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
40
Figure 17: Sort Error Rates.
Figure 18: Sort Learning Curves and Error Rates Versus Ponder Time (sequence error rate against training iterations, and against ponder, for a range of time penalties and for a network without ACT).
Figure 19: Sort Ponder Time and Error Rate Versus Input Difficulty. 'Difficulty' is the length of the sequence to be sorted.
1603.08983#40
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
41
Figure 19: Sort Ponder Time and Error Rate Versus Input Difficulty. 'Difficulty' is the length of the sequence to be sorted.
Figure 20: Ponder Time During Three Sort Sequences. The input sequences to be sorted are shown along the bottom x-axes and the network output sequences are shown along the top x-axes. All plots created using the same network, trained with τ = 10⁻³.
Figure 21: Wikipedia Error Rates (error versus time penalty).
in sequence length (the average ponder per timestep is nonetheless lower for longer sequences, as little pondering is done away from the spike).
# 3.5 Wikipedia Character Prediction
1603.08983#41
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
42
in sequence length (the average ponder per timestep is nonetheless lower for longer sequences, as little pondering is done away from the spike).

# 3.5 Wikipedia Character Prediction

The Wikipedia task is character prediction on text drawn from the Hutter prize Wikipedia dataset [15]. Following previous RNN experiments on the same data [8], the raw Unicode text was used, including XML tags and markup characters, with one byte presented per input timestep and the next byte predicted as a target. No validation set was used for early stopping, as the networks were unable to overfit the data, and all error rates are recorded on the training set. Sequences of 500 consecutive bytes were randomly chosen from the training set and presented to the network, whose internal state was reset to 0 at the start of each sequence (a minimal sampling sketch follows below).
1603.08983#42
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
43
LSTM networks were used with a single layer of 1500 cells and a size 256 softmax classification layer. As can be seen from Figures 21 and 22, the error rates are fairly similar with and without ACT, and across values of τ (although the learning curves suggest that the ACT networks are somewhat more data efficient). Furthermore, the amount of ponder per input is much lower than for the other problems, suggesting that the advantages of extra computation were slight for this task. However, Figure 23 reveals an intriguing pattern of ponder allocation while processing a sequence. Character prediction networks trained with ACT consistently pause at spaces between words, and pause for longer at 'boundary' characters such as commas and full stops. We speculate that the extra computation is used to make predictions about the next 'chunk' in the data (word, sentence, clause), much as humans have been found to do in self-paced reading experiments [16]. This suggests that ACT could be useful for inferring implicit boundaries or transitions in sequence data (a toy boundary-detection sketch follows below). Alternative measures for inferring transitions include the next-step prediction loss and predictive entropy, both of which tend to increase during harder predictions. However, as can be seen from the figure, they
1603.08983#43
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
44
Figure 22: Wikipedia Learning Curves (Zoomed) and Error Rates Versus Ponder Time (error against training iterations, and against ponder, for a range of time penalties and for a network without ACT).
Figure 23: Ponder Time, Prediction Loss and Prediction Entropy During a Wikipedia Text Sequence. The three panels show prediction entropy (bits), prediction loss (bits) and ponder over the text "and the many people caught in the middle of the two. In recent history, with scientists learning". Plot created using a network trained with τ = 6e−3.
1603.08983#44
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
45
Figure 23: Ponder Time, Prediction Loss and Prediction Entropy During a Wikipedia Text Sequence. Plot created using a network trained with τ = 6e−3.
are a less reliable indicator of boundaries, and are not likely to increase at points such as full stops and commas, as these are invariably followed by space characters. More generally, loss and entropy only indicate the difficulty of the current prediction, not the degree to which the current input is likely to impact future predictions. Furthermore, Figure 24 reveals that, as well as being an effective detector of non-text transition markers such as the opening brackets of XML tags, ACT does not increase computation time during random or fundamentally unpredictable sequences like the two ID numbers. This is unsurprising, as doing so will not improve its predictions. In contrast, both entropy and loss are inevitably high for unpredictable data (a short sketch of this entropy computation follows below). We are therefore hopeful that computation time will provide a better way to distinguish between structure and noise (or at least data perceived by the network as structure or noise) than existing measures of predictive difficulty.

# 4 Conclusion

This paper has introduced Adaptive Computation Time (ACT), a method that allows recurrent neural networks to learn how many updates to perform for each input they receive. Experiments on
1603.08983#45
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
46
# 4 Conclusion

This paper has introduced Adaptive Computation Time (ACT), a method that allows recurrent neural networks to learn how many updates to perform for each input they receive. Experiments on synthetic data prove that ACT can make otherwise inaccessible problems straightforward for RNNs to learn, and that it is able to dynamically adapt the amount of computation it uses to the demands of the data. An experiment on real data suggests that the allocation of computation steps learned by ACT can yield insight into both the structure of the data and the computational demands of predicting it. ACT promises to be particularly interesting for recurrent architectures containing soft attention modules [2, 10, 34, 12], which it could enable to dynamically adapt the number of glances or internal operations they perform at each time-step.
Figure 24: Ponder Time, Prediction Loss and Prediction Entropy During a Wikipedia Sequence Containing XML Tags. The three panels show entropy (bits), loss and ponder over the text "» United States security treaty</title> <id>1157</id> <revision> <id>15899658</id>". Created using the same network as Figure 23.
1603.08983#46
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]
1603.08983
47
One weakness of the current algorithm is that it is quite sensitive to the time penalty parameter that controls the relative cost of computation time versus prediction error. An important direction for future work will be to find ways of automatically determining and adapting the trade-off between accuracy and speed.

# Acknowledgments

The author wishes to thank Ivo Danihelka, Greg Wayne, Tim Harley, Malcolm Reynolds, Jacob Menick, Oriol Vinyals, Joel Leibo, Koray Kavukcuoglu and many others on the DeepMind team for valuable comments and suggestions, as well as Albert Zeyer, Martin Abadi, Dario Amodei, Eugene Brevdo and Christopher Olah for pointing out the discontinuity in the ponder cost, which was erroneously described as smooth in an earlier version of the paper.

# References

[1] G. An. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, 8(3):643–674, 1996.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473, 2014.
1603.08983#47
Adaptive Computation Time for Recurrent Neural Networks
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
http://arxiv.org/pdf/1603.08983
Alex Graves
cs.NE
null
null
cs.NE
20160329
20170221
[ { "id": "1502.04623" }, { "id": "1603.08575" }, { "id": "1511.06279" }, { "id": "1511.06297" }, { "id": "1507.01526" }, { "id": "1511.06391" } ]