| id (string, 12–15 chars) | title (string, 8–162 chars) | content (string, 1–17.6k chars) | prechunk_id (string, 0–15 chars) | postchunk_id (string, 0–15 chars) | arxiv_id (string, 10 chars) | references (list, 1 entry) |
|---|---|---|---|---|---|---|
1703.00441#17 | Learning to Optimize Neural Nets | that $\psi(a_t \mid s_t, t; \eta) := \mathcal{N}(K_t s_t + k_t, G_t)$, where $\eta := (K_t, k_t, G_t)_{t=1}^{T}$, and $\pi(a_t \mid o_t; \theta) = \mathcal{N}(\mu^\pi_\omega(o_t), \Sigma^\pi)$, where $\theta := (\omega, \Sigma^\pi)$ and $\mu^\pi_\omega(\cdot)$ can be an arbitrary function that is typically modelled using a nonlinear function approximator like a neural net. The mean of the policy is modelled as a recurrent neural net fragment that corresponds to a single time step, which takes the observation features $\Psi(\cdot)$ and the previous memory state as input and outputs the step to take.

# 3.4. Guided Policy Search

The reinforcement learning method we use is guided policy search (GPS) (Levine et al., 2015), which is a policy search method designed for searching over large classes of expressive non-linear policies in continuous state and action spaces. It maintains two policies, $\psi$ and $\pi$, where the former lies in a time-varying linear policy class in which the optimal policy can be found in closed form, and the latter lies in a stationary non-linear policy class. At each iteration, the algorithm constructs a model of the transition probability density $\tilde{p}(s_{t+1} \mid s_t, a_t, t; \zeta) = \mathcal{N}(A_t s_t + B_t a_t + c_t, F_t)$, where $\zeta := (A_t, B_t, c_t, F_t)_{t=1}^{T}$ is fitted to samples of $s_t$ drawn from the trajectory induced by $\psi$, which essentially amounts to a local linearization of the true transition probability $p(s_{t+1} \mid s_t, a_t, t)$. We will use $\mathbb{E}_{\tilde{p},\psi}[\cdot]$ to denote expectation taken with respect to the trajectory induced by $\psi$ under

Footnote 2: In practice, the explicit form of the observation probability $p_o$ is usually not known or the integral may be intractable to compute. So, a linear Gaussian model is fitted to samples of $s_t$ and $a_t$ and used in place of the true $\pi(a_t \mid s_t; \theta)$ where necessary. Footnote 3: Though the Bregman divergence penalty is applied to the original probability distributions over $a_t$. | 1703.00441#16 | 1703.00441#18 | 1703.00441 | [
"1606.01467"
]
|
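To make the two policy classes above concrete, here is a minimal NumPy sketch of how actions could be sampled from a time-varying linear-Gaussian policy ψ and from a stationary nonlinear policy π whose mean comes from one step of a recurrent cell. All names (`K`, `k`, `G`, `rnn_step`, `sigma_pi`) are illustrative assumptions, not code from the paper.

```python
import numpy as np

def sample_psi(s_t, K, k, G, rng):
    """Time-varying linear-Gaussian policy psi(a_t | s_t, t):
    mean = K_t s_t + k_t, covariance = G_t (arguments are for one fixed t)."""
    mean = K @ s_t + k
    return rng.multivariate_normal(mean, G)

def sample_pi(o_t, h_prev, rnn_step, sigma_pi, rng):
    """Stationary nonlinear policy pi(a_t | o_t): the mean is one step of a
    recurrent net that consumes observation features and its previous memory."""
    mean, h_next = rnn_step(o_t, h_prev)   # hypothetical RNN cell
    a_t = rng.multivariate_normal(mean, sigma_pi)
    return a_t, h_next
```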
1703.00441#18 | Learning to Optimize Neural Nets | the modelled transition probability $\tilde{p}$. Additionally, the algorithm fits local quadratic approximations to $c(s_t)$ around samples of $s_t$ drawn from the trajectory induced by $\psi$, so that $c(s_t) \approx \tilde{c}(s_t) := \frac{1}{2} s_t^{\top} C_t s_t + h_t^{\top} s_t$ for $s_t$'s that are near the samples.

spaces. For example, in the case of GPS, because the running time of LQG is cubic in the dimensionality of the state space, performing policy search even in the simple class of linear-Gaussian policies would be prohibitively expensive when the dimensionality of the optimization problem is high.

With these assumptions, the subproblem that needs to be solved to update $\eta = (K_t, k_t, G_t)_{t=1}^{T}$ is:

$$\min_{\eta} \; \sum_{t=0}^{T} \left( \mathbb{E}_{\tilde{p},\psi}\!\left[\tilde{c}(s_t) - \lambda_t^{\top} a_t\right] + \nu_t D_t(\eta, \theta) \right) \quad \text{s.t.} \quad \sum_{t=0}^{T} \mathbb{E}_{\tilde{p},\psi}\!\left[ D_{\mathrm{KL}}\!\left( \psi(a_t \mid s_t, t; \eta) \,\big\|\, \psi(a_t \mid s_t, t; \eta') \right) \right] \leq \epsilon,$$

where $\eta'$ denotes the old $\eta$ from the previous iteration. Because $\tilde{p}$ and $\tilde{c}$ are only valid locally around the trajectory induced by $\psi$, the constraint is added to limit the amount by which $\eta$ is updated. It turns out that the unconstrained problem can be solved in closed form using a dynamic programming algorithm known as the linear-quadratic-Gaussian (LQG) regulator in time linear in the time horizon $T$ and cubic in the dimensionality of the state space $D$. The constrained problem is solved using dual gradient descent, which uses LQG as a subroutine to solve for the primal variables in each iteration and increments the dual variable on the constraint until it is satisfied. Updating $\theta$ is straightforward, since expectations taken with respect to the trajectory induced by $\pi$ are always conditioned on $s_t$ and all outer expectations over $s_t$ are taken with respect to the trajectory induced by $\psi$ | 1703.00441#17 | 1703.00441#19 | 1703.00441 | [
"1606.01467"
]
|
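The constrained η-update described above is solved by dual gradient descent with LQG as the inner solver. The sketch below shows only that control flow; `lqg_backward_pass` and `expected_kl` are hypothetical helpers standing in for the closed-form LQG solution and the expected-KL estimate, and the dual-update schedule is a simplifying assumption.

```python
def update_eta(eta_old, cost_model, dyn_model, epsilon,
               lqg_backward_pass, expected_kl,
               dual_init=1.0, dual_step=2.0, max_iters=20):
    """Dual gradient descent: solve the unconstrained problem in closed form
    with LQG for a fixed dual variable, then increase the dual variable until
    sum_t E[KL(psi_new || psi_old)] <= epsilon is satisfied."""
    dual = dual_init
    eta_new = eta_old
    for _ in range(max_iters):
        # Primal step: closed-form time-varying linear-Gaussian policy.
        eta_new = lqg_backward_pass(cost_model, dyn_model, eta_old, dual)
        if expected_kl(eta_new, eta_old, dyn_model) <= epsilon:
            break
        # Constraint violated: penalize deviation more strongly and re-solve.
        dual *= dual_step
    return eta_new
```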
1703.00441#19 | Learning to Optimize Neural Nets | . Therefore, $\pi$ is essentially decoupled from the transition probability $p(s_{t+1} \mid s_t, a_t, t)$ and so its parameters can be updated without affecting the distribution of $s_t$'s. The subproblem that needs to be solved to update $\theta$ therefore amounts to a standard supervised learning problem. Since $\psi(a_t \mid s_t, t; \eta)$ and $\pi(a_t \mid s_t; \theta)$ are Gaussian, $D(\theta, \eta)$ can be computed analytically. More concretely, if we assume $\Sigma^\pi$ to be fixed for simplicity, the subproblem that is solved for updating $\theta = (\omega, \Sigma^\pi)$ is:

$$\min_{\theta} \; \mathbb{E}_{\psi}\!\left[ \sum_{t=0}^{T} \lambda_t^{\top} \mu^\pi_\omega(o_t) + \frac{1}{2}\left(\operatorname{tr}\!\left(G_t^{-1} \Sigma^\pi\right) - \log\left|\Sigma^\pi\right|\right) + \frac{1}{2}\left(\mu^\pi_\omega(o_t) - \mathbb{E}_{\psi}[a_t \mid s_t]\right)^{\top} G_t^{-1} \left(\mu^\pi_\omega(o_t) - \mathbb{E}_{\psi}[a_t \mid s_t]\right) \right]$$

Note that the last term is the squared Mahalanobis distance between the mean actions of $\pi$ and $\psi$ at time step $t$, which is intuitive as we would like to encourage $\pi$ to match $\psi$. | 1703.00441#18 | 1703.00441#20 | 1703.00441 | [
"1606.01467"
]
|
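Since the θ-update reduces to supervised learning, its per-time-step loss can be written directly from the objective reconstructed above. The sketch assumes a sampled target action `E[a_t | s_t]` from ψ and a fixed `sigma_pi`; the function and argument names are illustrative, not from the paper's code.

```python
import numpy as np

def theta_step_loss(mu_pi, target_action, lam_t, G_t, sigma_pi):
    """One time step of the theta-update objective:
    lambda_t^T mu  +  0.5*(tr(G_t^{-1} Sigma_pi) - log|Sigma_pi|)
    +  0.5 * squared Mahalanobis distance between the two policy means."""
    G_inv = np.linalg.inv(G_t)
    diff = mu_pi - target_action            # mu^pi(o_t) - E_psi[a_t | s_t]
    mahalanobis = diff @ G_inv @ diff
    reg = np.trace(G_inv @ sigma_pi) - np.linalg.slogdet(sigma_pi)[1]
    return lam_t @ mu_pi + 0.5 * reg + 0.5 * mahalanobis
```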
1703.00441#20 | Learning to Optimize Neural Nets | Fortunately, many high-dimensional optimization problems have underlying structure that can be exploited. For example, the parameters of neural nets are equivalent up to permutation among certain coordinates. More concretely, for fully connected neural nets, the dimensions of a hidden layer and the corresponding weights can be permuted arbitrarily without changing the function they compute. Because permuting the dimensions of two adjacent layers can permute the weight matrix arbitrarily, an optimization algorithm should be invariant to permutations of the rows and columns of a weight matrix. A reasonable prior to impose is that the algorithm should behave in the same manner on all coordinates that correspond to entries in the same matrix. That is, if the values of two coordinates in all current and past gradients and iterates are identical, then the step vector produced by the algorithm should have identical values in these two coordinates. We will refer to the set of coordinates on which permutation invariance is enforced as a coordinate group. For the purposes of learning an optimization algorithm for neural nets, a natural choice would be to make each coordinate group correspond to a weight matrix or a bias vector. Hence, the total number of coordinate groups is twice the number of layers, which is usually fairly small. In the case of GPS, we impose this prior on both $\psi$ and $\pi$. For the purposes of updating $\eta$, we first impose a block-diagonal structure on the parameters $A_t$, $B_t$ and $F_t$ of the fitted transition probability density $\tilde{p}(s_{t+1} \mid s_t, a_t, t; \zeta) = \mathcal{N}(A_t s_t + B_t a_t + c_t, F_t)$, so that for each coordinate in the optimization problem, the dimensions of $s_{t+1}$ that correspond to the coordinate only depend on the dimensions of $s_t$ and $a_t$ that correspond to the same coordinate. As a result, $\tilde{p}(s_{t+1} \mid s_t, a_t, t; \zeta)$ decomposes into multiple independent probability densities $\tilde{p}^j(s^j_{t+1} \mid s^j_t, a^j_t, t; \zeta^j)$, one for each coordinate $j$. Similarly, we also impose a block-diagonal structure on $C_t$ for fitting $\tilde{c}(s_t)$ and on the parameter matrix of the fitted model for $\pi(a_t \mid s_t; \theta)$. Under these assumptions, the resulting $K_t$ and $G_t$ are guaranteed to be block-diagonal as well. | 1703.00441#19 | 1703.00441#21 | 1703.00441 | [
"1606.01467"
]
|
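The coordinate-group prior can be made concrete with a little bookkeeping: every parameter of a fully connected net is assigned the index of the weight matrix or bias vector it belongs to, so statistics and model blocks can later be shared within each group. A minimal sketch under that reading; the helper name and the layer sizes in the usage line are illustrative (the sizes match the two-layer net described later in the paper).

```python
import numpy as np

def coordinate_groups(layer_sizes):
    """Return one group id per flattened parameter coordinate:
    each weight matrix and each bias vector forms its own group,
    i.e. 2 * (number of layers) groups in total."""
    group_ids, gid = [], 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        group_ids.append(np.full(n_in * n_out, gid))   # weight matrix group
        group_ids.append(np.full(n_out, gid + 1))      # bias vector group
        gid += 2
    return np.concatenate(group_ids)

groups = coordinate_groups([48, 48, 10])    # 48-48-10 net
print(groups.shape, groups.max() + 1)       # (2842,) coordinates, 4 groups
```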
1703.00441#21 | Learning to Optimize Neural Nets | Hence, the Bregman divergence penalty term $D(\eta, \theta)$ decomposes into a sum of Bregman divergence terms, one for each coordinate.

# 3.5. Convolutional GPS

The problem of learning high-dimensional optimization algorithms presents challenges for reinforcement learning algorithms due to high dimensionality of the state and action

We then further constrain the dual variables $\lambda_t$, the sub-vectors of parameter vectors and the sub-matrices of parameter matrices corresponding to each coordinate group to be identical across the group. Additionally, we replace the weight $\nu_t$ on $D(\eta, \theta)$ with an individual weight on each Bregman | 1703.00441#20 | 1703.00441#22 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#22 | Learning to Optimize Neural Nets | Figure 1. Comparison of the various hand-engineered and learned algorithms on training neural nets with 48 input and hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.

divergence term for each coordinate group. The problem then decomposes into multiple independent subproblems, one for each coordinate group. Because the dimensionality of the state subspace corresponding to each coordinate is constant, LQG can be executed on each subproblem much more effi- | 1703.00441#21 | 1703.00441#23 | 1703.00441 | [
"1606.01467"
]
|
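Because LQG is cubic in the state dimensionality, running it once per coordinate group on constant-size blocks is far cheaper than running it on the full problem. A toy cost comparison under that stated cubic scaling; the number of features per coordinate and the other constants are arbitrary assumptions, chosen only to illustrate the gap.

```python
def lqg_cost(state_dim, horizon):
    """Toy cost model: LQG is linear in the horizon and cubic in state dim."""
    return horizon * state_dim ** 3

n_coords, feats_per_coord, horizon = 2842, 6, 400     # illustrative numbers
full = lqg_cost(n_coords * feats_per_coord, horizon)
decomposed = n_coords * lqg_cost(feats_per_coord, horizon)
print(full / decomposed)   # roughly n_coords**2: orders of magnitude saved
```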
1703.00441#23 | Learning to Optimize Neural Nets | ciently.

- $\left\{ \overline{\nabla \hat{f}}^{(t)} \,/\, \left( \left| \overline{\nabla \hat{f}}^{(\max(t-5(i+1),\, t \bmod 5))} \right| + 1 \right) \right\}_{i=0}^{24}$
- $\left\{ \left[ \bar{x}^{(\max(t-5(i+1),\, t \bmod 5+5))} - \bar{x}^{(\max(t-5(i+2),\, t \bmod 5))} \right] \oslash \left[ \bar{x}^{(t-5i)} - \bar{x}^{(t-5(i+1))} + 0.1 \right] \right\}_{i=0}^{24}$

Similarly, for $\pi$, we choose a $\mu^\pi_\omega(\cdot)$ that shares parameters across different coordinates in the same group. We also impose a block-diagonal structure on $\Sigma^\pi$ and constrain the appropriate sub-matrices to share their entries.

Note that all operations are applied element-wise. | 1703.00441#22 | 1703.00441#24 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#24 | Learning to Optimize Neural Nets | Also, whenever a feature becomes undefined (i.e., when the time step index becomes negative), it is replaced with the all-zeros vector.

# 3.6. Features

We describe the features $\Phi(\cdot)$ and $\Psi(\cdot)$ at time step $t$, which define the state $s_t$ and observation $o_t$ respectively. Unlike state features, which are only used when training the optimization algorithm, observation features $\Psi(\cdot)$ are used both during training and at test time. Consequently, we use noisier observation features that can be computed more efficiently and require less memory overhead. The observation features consist of the following:

- $\left( \hat{f}(x^{(t)}) - \hat{f}(x^{(t-1)}) \right) / \hat{f}(x^{(t-1)})$
- $\nabla \hat{f}(x^{(t)}) \,/\, \left( \left| \nabla \hat{f}(x^{(\max(t-2,0))}) \right| + 1 \right)$
- $\left[ x^{(\max(t-2,1))} - x^{(\max(t-2,0))} \right] \oslash \left[ x^{(t)} - x^{(t-1)} + 0.1 \right]$

Because of the stochasticity of gradients and objective values, the state features $\Phi(\cdot)$ are defined in terms of summary statistics of the history of iterates $\{x^{(i)}\}_{i=0}^{t}$, gradients $\{\nabla \hat{f}(x^{(i)})\}_{i=0}^{t}$ and objective values $\{\hat{f}(x^{(i)})\}_{i=0}^{t}$. We define the following statistics, which we will refer to as the average recent iterate, gradient and objective value respectively:

- $\bar{x}^{(i)} = \frac{1}{\min(i+1,3)} \sum_{j=\max(i-2,0)}^{i} x^{(j)}$
- $\overline{\nabla \hat{f}}^{(i)} = \frac{1}{\min(i+1,3)} \sum_{j=\max(i-2,0)}^{i} \nabla \hat{f}(x^{(j)})$
- $\bar{\hat{f}}^{(i)} = \frac{1}{\min(i+1,3)} \sum_{j=\max(i-2,0)}^{i} \hat{f}(x^{(j)})$

# 4. Experiments

For clarity, we will refer to training of the optimization algorithm as "meta-training" to differentiate it from base-level training, which will simply be referred to as "training".

The state features $\Phi(\cdot)$ consist of the relative change in the average recent objective value, the average recent gradient normalized by the magnitude of a previous average recent gradient and a previous change in average recent iterate relative to the current change in average recent iterate:

- $\left\{ \left( \bar{\hat{f}}^{(t-5i)} - \bar{\hat{f}}^{(t-5(i+1))} \right) / \bar{\hat{f}}^{(t-5(i+1))} \right\}_{i=0}^{24}$ | 1703.00441#23 | 1703.00441#25 | 1703.00441 | [
"1606.01467"
]
|
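The averaging statistics above are just 3-step trailing means over the history of iterates, gradients and objective values, and the per-step observation features follow directly from them. The sketch below implements the formulas as reconstructed here; treat the exact window, the indices and the 0.1 damping constant as best-effort readings of the recovered equations rather than verified values, and all names as hypothetical.

```python
import numpy as np

def average_recent(history, i, window=3):
    """Trailing mean over indices max(i-2, 0) .. i of a list of arrays/scalars."""
    lo = max(i - (window - 1), 0)
    return sum(history[lo:i + 1]) / min(i + 1, window)

def observation_features(f_hist, g_hist, x_hist, t):
    """Noisy per-step features for t >= 2: relative objective change, a gradient
    normalized by an earlier gradient's magnitude, and a ratio of iterate changes."""
    rel_obj = (f_hist[t] - f_hist[t - 1]) / f_hist[t - 1]
    norm_grad = g_hist[t] / (np.abs(g_hist[t - 2]) + 1.0)
    prev_step = x_hist[t - 1] - x_hist[t - 2]
    cur_step = x_hist[t] - x_hist[t - 1]
    step_ratio = prev_step / (cur_step + 0.1)   # element-wise, damped
    return rel_obj, norm_grad, step_ratio
```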
1703.00441#25 | Learning to Optimize Neural Nets | We meta-trained an optimization algorithm on a single objective function, which corresponds to the problem of training a two-layer neural net with 48 input units, 48 hidden units and 10 output units on a randomly projected and normalized version of the MNIST training set with dimensionality 48 and unit variance in each dimension. We modelled the optimization algorithm using a recurrent neural net

Figure 2. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. | 1703.00441#24 | 1703.00441#26 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#26 | Learning to Optimize Neural Nets | Best viewed in colour.

Figure 3. Comparison of the various hand-engineered and learned algorithms on training neural nets with 48 input and hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 10. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.

with a single layer of 128 LSTM (Hochreiter & Schmidhuber, 1997) cells. We used a time horizon of 400 iterations and a mini-batch size of 64 for computing stochastic gradients and objective values. We evaluate the optimization algorithm on its ability to generalize to unseen objective functions, which correspond to the problems of training neural nets on different tasks/datasets. We evaluate the learned optimization algorithm on three datasets: the Toronto Faces Dataset (TFD), CIFAR-10 and CIFAR-100. These datasets are chosen for their very different characteristics from MNIST and each other: TFD contains 3300 grayscale images that have relatively little variation and has seven different categories, whereas CIFAR-100 contains 50,000 colour images that have varied appearance and has 100 different categories. All algorithms are tuned on the training objective function. For hand-engineered algorithms, this entails choosing the best hyperparameters; for learned algorithms, this entails meta-training on the objective function. We compare to seven hand-engineered algorithms: stochastic gradient descent, momentum, conjugate gradient, L-BFGS, ADAM, AdaGrad and RMSprop. In addition, we compare to an optimization algorithm meta-trained using the method described in (Andrychowicz et al., 2016) on the same training objective function (training a two-layer neural net on randomly projected and normalized MNIST) under the same setting (a time horizon of 400 iterations and a mini-batch size of 64). First, we examine the performance of various optimization algorithms on similar objective functions. The optimization problems under consideration are those for training neural nets that have the same number of input and hidden units (48 and 48) as those used during meta-training. The number of output units varies with the number of categories in each dataset. | 1703.00441#25 | 1703.00441#27 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#27 | Learning to Optimize Neural Nets | We use the same mini-batch size as that used during meta-training. As shown in Figure 1, the optimization algorithm meta-trained using our method (which we will refer to as Predicted Step Descent) consistently descends to the optimum the fastest across all datasets. On the other hand, other algorithms are not as consistent, and their relative ranking varies by dataset. This suggests that Predicted Step Descent has learned to be robust to variations in the data distributions, despite being trained on only one objective function, which is associated with a very specific data distribution that characterizes MNIST. It is also interesting to note that while the | 1703.00441#26 | 1703.00441#28 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#28 | Learning to Optimize Neural Nets | Figure 4. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 with mini-batches of size 10. The vertical axis is the true objective value and the horizontal axis represents the iteration. Best viewed in colour.

Figure 5. Comparison of the various hand-engineered and learned algorithms on training neural nets with 100 input units and 200 hidden units on (a) TFD, (b) CIFAR-10 and (c) CIFAR-100 for 800 iterations with mini-batches of size 64. The vertical axis is the true objective value and the horizontal axis represents the iteration. | 1703.00441#27 | 1703.00441#29 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#29 | Learning to Optimize Neural Nets | Best viewed in colour.

algorithm meta-trained using (Andrychowicz et al., 2016) (which we will refer to as L2LBGDBGD) performs well on CIFAR, it is unable to reach the optimum on TFD. Next, we change the architecture of the neural nets and see if Predicted Step Descent generalizes to the new architecture. We increase the number of input units to 100 and the number of hidden units to 200, so that the number of parameters is roughly increased by a factor of 8. As shown in Figure 2, Predicted Step Descent consistently outperforms other algorithms on each dataset, despite having not been trained to optimize neural nets of this architecture. Interestingly, while it exhibited a bit of oscillation initially on TFD and CIFAR-10, it quickly recovered and overtook other algorithms, which is reminiscent of the phenomenon reported in (Li & Malik, 2016) for low-dimensional optimization problems. This suggests that it has learned to detect when it is performing poorly and knows how to change tack accordingly. L2LBGDBGD experienced difficulties on TFD and CIFAR-10 as well, but slowly diverged.

from 64 to 10 on both the original architecture with 48 input and hidden units and the enlarged architecture with 100 input units and 200 hidden units. As shown in Figure 3, on the original architecture, Predicted Step Descent still outperforms all other algorithms and is able to handle the increased stochasticity fairly well. In contrast, conjugate gradient and L2LBGDBGD had some difficulty handling the increased stochasticity on TFD and, to a lesser extent, on CIFAR-10. In the former case, both diverged; in the latter case, both were progressing slowly towards the optimum. On the enlarged architecture, Predicted Step Descent experienced some significant oscillations on TFD and CIFAR-10, but still managed to achieve a much better objective value than all the other algorithms. Many hand-engineered algorithms also experienced much greater oscillations than previously, suggesting that the optimization problems are inherently harder. L2LBGDBGD diverged fairly quickly on these two datasets. | 1703.00441#28 | 1703.00441#30 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#30 | Learning to Optimize Neural Nets | We now investigate how robust Predicted Step Descent is to stochasticity of the gradients. To this end, we take a look at its performance when we reduce the mini-batch size

Finally, we try doubling the number of iterations. As shown in Figure 5, despite being trained over a time horizon of 400 iterations, Predicted Step Descent behaves reasonably beyond the number of iterations it is trained for.

# 5. Conclusion

In this paper, we presented a new method for learning optimization algorithms for high-dimensional stochastic problems. We applied the method to learning an optimization algorithm for training shallow neural nets. We showed that the algorithm learned using our method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on unrelated tasks/datasets like the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. We also demonstrated that the learned optimization algorithm is robust to changes in the stochasticity of gradients and the neural net architecture. | 1703.00441#29 | 1703.00441#31 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#31 | Learning to Optimize Neural Nets | and Da Costa, Joaquim Pinto. Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results. Machine Learning, 50(3):251–277, 2003. Daniel, Christian, Taylor, Jonathan, and Nowozin, Sebastian. Learning step size controllers for robust neural network training. In Thirtieth AAAI Conference on Artificial Intelligence, 2016. Domke, Justin. Generic methods for optimization-based modeling. | 1703.00441#30 | 1703.00441#32 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#32 | Learning to Optimize Neural Nets | In AISTATS, volume 22, pp. 318–326, 2012. # References Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016. Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011. Feurer, Matthias, Springenberg, Jost Tobias, and Hutter, Frank. Initializing Bayesian hyperparameter optimization via meta-learning. In AAAI, pp. 1128–1135, 2015. Baxter, Jonathan, Caruana, Rich, Mitchell, Tom, Pratt, Lorien Y, Silver, Daniel L, and Thrun, Sebastian. | 1703.00441#31 | 1703.00441#33 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#33 | Learning to Optimize Neural Nets | NIPS 1995 workshop on learning to learn: Knowledge consolidation and transfer in inductive systems. https://web.archive.org/web/20000618135816/http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transfer.html, 1995. Accessed: 2015-12-05. Fu, Jie, Lin, Zichuan, Liu, Miao, Leonard, Nicholas, Feng, Jiashi, and Chua, Tat-Seng. | 1703.00441#32 | 1703.00441#34 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#34 | Learning to Optimize Neural Nets | Deep Q-networks for accelerating the training of deep neural networks. arXiv preprint arXiv:1606.01467, 2016. Gregor, Karol and LeCun, Yann. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 399–406, 2010. Bengio, Y, Bengio, S, and Cloutier, J. Learning a synaptic learning rule. In Neural Networks, 1991., IJCNN-91-Seattle International Joint Conference on, volume 2, pp. 969 vol. 2. IEEE, 1991. Hansen, Samantha. | 1703.00441#33 | 1703.00441#35 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#35 | Learning to Optimize Neural Nets | Using deep Q-learning to control optimization hyperparameters. arXiv preprint arXiv:1602.04062, 2016. Bengio, Yoshua. Gradient-based optimization of hyperparameters. Neural Computation, 12(8):1889–1900, 2000. Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012. Bergstra, James S, Bardenet, Rémi, Bengio, Yoshua, and Kégl, Balázs. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pp. 2546–2554, 2011. Bray, M, Koller-Meier, E, Muller, P, Van Gool, L, and Schraudolph, NN. 3D hand tracking by rapid stochastic gradient descent using a skinning model. In Visual Media Production, 2004 (CVMP), 1st European Conference on, pp. 59–68. IET, 2004. | 1703.00441#34 | 1703.00441#36 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#36 | Learning to Optimize Neural Nets | Hochreiter, Sepp, Younger, A Steven, and Conwell, Peter R. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87–94. Springer, 2001. Hutter, Frank, Hoos, Holger H, and Leyton-Brown, Kevin. Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization, pp. 507–523. Springer, 2011. Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Brazdil, Pavel, Carrier, Christophe Giraud, Soares, Carlos, and Vilalta, Ricardo. | 1703.00441#35 | 1703.00441#37 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#37 | Learning to Optimize Neural Nets | Metalearning: Applications to Data Mining. Springer Science & Business Media, 2008. Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015. | 1703.00441#36 | 1703.00441#38 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#38 | Learning to Optimize Neural Nets | Li, Ke and Malik, Jitendra. Learning to optimize. CoRR, abs/1606.01885, 2016. Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan P. Gradient-based hyperparameter optimization through reversible learning. arXiv preprint arXiv:1502.03492, 2015. Ruvolo, Paul L, Fasel, Ian, and Movellan, Javier R. Optimization on a budget: A reinforcement learning approach. In Advances in Neural Information Processing Systems, pp. 1385–1392, 2009. Schmidhuber, Jürgen. | 1703.00441#37 | 1703.00441#39 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#39 | Learning to Optimize Neural Nets | Optimal ordered problem solver. Machine Learning, 54(3):211–254, 2004. Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951–2959, 2012. Sprechmann, Pablo, Litman, Roee, Yakar, Tal Ben, Bronstein, Alexander M, and Sapiro, Guillermo. Supervised sparse analysis and synthesis operators. In Advances in Neural Information Processing Systems, pp. 908–916, 2013. Swersky, Kevin, Snoek, Jasper, and Adams, Ryan P. | 1703.00441#38 | 1703.00441#40 | 1703.00441 | [
"1606.01467"
]
|
1703.00441#40 | Learning to Optimize Neural Nets | Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pp. 2004–2012, 2013. Thrun, Sebastian and Pratt, Lorien. Learning to Learn. Springer Science & Business Media, 2012. Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012. Vilalta, Ricardo and Drissi, Youssef. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002. Wang, Huahua and Banerjee, Arindam. Bregman alternating direction method of multipliers. CoRR, abs/1306.3203, 2014. | 1703.00441#39 | 1703.00441 | [
"1606.01467"
]
|
|
1702.08734#0 | Billion-scale similarity search with GPUs | 7 1 0 2 b e F 8 2 ] V C . s c [ 1 v 4 3 7 8 0 . 2 0 7 1 : v i X r a # Billion-scale similarity search with GPUs Jeff Johnson Facebook AI Research New York Matthijs Douze Facebook AI Research Paris Herv ´e J ´egou Facebook AI Research Paris ABSTRACT Similarity search ï¬ nds application in specialized database systems handling complex data such as images or videos, which are typically represented by high-dimensional features and require speciï¬ c indexing structures. This paper tackles the problem of better utilizing GPUs for this task. While GPUs excel at data-parallel tasks, prior approaches are bot- tlenecked by algorithms that expose less parallelism, such as k-min selection, or make poor use of the memory hierarchy. We propose a design for k-selection that operates at up to 55% of theoretical peak performance, enabling a nearest neighbor implementation that is 8.5à faster than prior GPU state of the art. | 1702.08734#1 | 1702.08734 | [
"1510.00149"
]
|
|
1702.08734#1 | Billion-scale similarity search with GPUs | We apply it in diï¬ erent similarity search scenarios, by proposing optimized design for brute-force, ap- proximate and compressed-domain search based on product quantization. In all these setups, we outperform the state of the art by large margins. Our implementation enables the construction of a high accuracy k-NN graph on 95 million images from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced our approach1 for the sake of comparison and reproducibility. as the underlying processes either have high arithmetic com- plexity and/or high data bandwidth demands [28], or cannot be eï¬ ectively partitioned without failing due to communi- cation overhead or representation quality [38]. Once pro- duced, their manipulation is itself arithmetically intensive. However, how to utilize GPU assets is not straightforward. More generally, how to exploit new heterogeneous architec- tures is a key subject for the database community [9]. In this context, searching by numerical similarity rather than via structured relations is more suitable. | 1702.08734#0 | 1702.08734#2 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#2 | Billion-scale similarity search with GPUs | This could be to ï¬ nd the most similar content to a picture, or to ï¬ nd the vectors that have the highest response to a linear classiï¬ er on all vectors of a collection. One of the most expensive operations to be performed on large collections is to compute a k-NN graph. It is a directed graph where each vector of the database is a node and each edge connects a node to its k nearest neighbors. This is our ï¬ agship application. | 1702.08734#1 | 1702.08734#3 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#3 | Billion-scale similarity search with GPUs | Note, state of the art methods like NN-Descent [15] have a large memory overhead on top of the dataset itself and cannot readily scale to the billion-sized databases we consider. # INTRODUCTION Images and videos constitute a new massive source of data for indexing and search. Extensive metadata for this con- tent is often not available. Search and interpretation of this and other human-generated content, like text, is diï¬ cult and important. A variety of machine learning and deep learn- ing algorithms are being used to interpret and classify these complex, real-world entities. Popular examples include the text representation known as word2vec [32], representations of images by convolutional neural networks [39, 19], and im- age descriptors for instance search [20]. Such representations or embeddings are usually real-valued, high-dimensional vec- tors of 50 to 1000+ dimensions. Many of these vector repre- sentations can only eï¬ ectively be produced on GPU systems, | 1702.08734#2 | 1702.08734#4 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#4 | Billion-scale similarity search with GPUs | 1https://github.com/facebookresearch/faiss Such applications must deal with the curse of dimension- ality [46], rendering both exhaustive search or exact index- ing for non-exhaustive search impractical on billion-scale databases. This is why there is a large body of work on approximate search and/or graph construction. To handle huge datasets that do not ï¬ t in RAM, several approaches employ an internal compressed representation of the vec- tors using an encoding. This is especially convenient for memory-limited devices like GPUs. It turns out that accept- ing a minimal accuracy loss results in orders of magnitude of compression [21]. The most popular vector compression methods can be classiï¬ ed into either binary codes [18, 22], or quantization methods [25, 37]. Both have the desirable property that searching neighbors does not require recon- structing the vectors. Our paper focuses on methods based on product quanti- zation (PQ) codes, as these were shown to be more eï¬ ective than binary codes [34]. In addition, binary codes incur im- portant overheads for non-exhaustive search methods [35]. Several improvements were proposed after the original prod- uct quantization proposal known as IVFADC [25]; most are diï¬ cult to implement eï¬ ciently on GPU. For instance, the inverted multi-index [4], useful for high-speed/low-quality operating points, depends on a complicated â multi-sequenceâ algorithm. The optimized product quantization or OPQ [17] is a linear transformation on the input vectors that improves the accuracy of the product quantization; it can be applied | 1702.08734#3 | 1702.08734#5 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#5 | Billion-scale similarity search with GPUs | 1 as a pre-processing. The SIMD-optimized IVFADC imple- mentation from [2] operates only with sub-optimal parame- ters (few coarse quantization centroids). Many other meth- ods, like LOPQ and the Polysemous codes [27, 16] are too complex to be implemented eï¬ ciently on GPUs. There are many implementations of similarity search on GPUs, but mostly with binary codes [36], small datasets [44], or exhaustive search [14, 40, 41]. To the best of our knowl- edge, only the work by Wieschollek et al. [47] appears suit- able for billion-scale datasets with quantization codes. This is the prior state of the art on GPUs, which we compare against in Section 6.4. This paper makes the following contributions: | 1702.08734#4 | 1702.08734#6 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#6 | Billion-scale similarity search with GPUs | â ¢ a GPU k-selection algorithm, operating in fast register memory and ï¬ exible enough to be fusable with other kernels, for which we provide a complexity analysis; â ¢ a near-optimal algorithmic layout for exact and ap- proximate k-nearest neighbor search on GPU; â ¢ a range of experiments that show that these improve- ments outperform previous art by a large margin on mid- to large-scale nearest-neighbor search tasks, in single or multi-GPU conï¬ gurations. | 1702.08734#5 | 1702.08734#7 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#7 | Billion-scale similarity search with GPUs | The paper is organized as follows. Section 2 introduces the context and notation. Section 3 reviews GPU archi- tecture and discusses problems appearing when using it for similarity search. Section 4 introduces one of our main con- tributions, i.e., our k-selection method for GPUs, while Sec- tion 5 provides details regarding the algorithm computation layout. Finally, Section 6 provides extensive experiments for our approach, compares it to the state of the art, and shows concrete use cases for image collections. # 2. PROBLEM STATEMENT We are concerned with similarity search in vector collec- tions. Given the query vector x â ¬ R? and the collectio: [yilisore (Yi â ¬ Râ ), we search: L = k-argmin,_o.¢|/¢ â yi| 2, (1) i.e., we search the k nearest neighbors of x in terms of L2 distance. The L2 distance is used most often, as it is op- timized by design when learning several embeddings (e.g., [20]), due to its attractive linear algebra properties. The lowest distances are collected by k-selection. For an array [ai]i=o:c, k-selection finds the k lowest valued elements [as;Jiso:k, @s; < Gs;,,, along with the indices [s;J]i=0:%, 0 < 8; < 4, of those elements from the input array. The a; will be 32-bit floating point values; the s; are 32- or 64-bit integers. Other comparators are sometimes desired; e.g., for cosine similarity we search for highest values. The order between equivalent keys as; = @s,; is not specified. | 1702.08734#6 | 1702.08734#8 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#8 | Billion-scale similarity search with GPUs | Batching. Typically, searches are performed in batches of nq query vectors [17;]j=0:n, (ej â ¬ R*) in parallel, which allows for more flexibility when executing on multiple CPU threads or on GPU. Batching for k-selection entails selecting Nq X k elements and indices from nq separate arrays, where each array is of a potentially different length ¢; > k. ?To avoid clutter in 0-based indexing, we use the array no- tation 0: £ to denote the range {0 â 1} inclusive. 2 Exact search. The exact solution computes the full pair- wise distance matrix D = [||xj â Yill3]j=0:ng,i=020 â | 1702.08734#7 | 1702.08734#9 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#9 | Billion-scale similarity search with GPUs | ¬ RX! In practice, we use the decomposition Ilxj â yell = |lxall? + llyill? â 2(a3,.m). (2) The two first terms can be precomputed in one pass over the matrices X and Y whose rows are the [x;] and [y;]. The bottleneck is to evaluate (x;,y:), equivalent to the matrix multiplication XY'. The k-nearest neighbors for each of the nq queries are k-selected along each row of D. Compressed-domain search. From now on, we focus on approximate nearest-neighbor search. We consider, in par- ticular, the IVFADC indexing structure [25]. The IVFADC index relies on two levels of quantization, and the database vectors are encoded. The database vector y is approximated as: | 1702.08734#8 | 1702.08734#10 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#10 | Billion-scale similarity search with GPUs | y â q(y) = q1(y) + q2(y â q1(y)) (3) where q1 : Rd â C1 â Rd and q2 : Rd â C2 â Rd are quan- tizers; i.e., functions that output an element from a ï¬ nite set. Since the sets are ï¬ nite, q(y) is encoded as the index of q1(y) and that of q2(y â q1(y)). The ï¬ rst-level quantizer is a coarse quantizer and the second level ï¬ ne quantizer encodes the residual vector after the ï¬ rst level. | 1702.08734#9 | 1702.08734#11 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#11 | Billion-scale similarity search with GPUs | The Asymmetric Distance Computation (ADC) search method returns an approximate result: Lave = k-argmin,â o.¢||% â 4(y:)|l2- (4) For IVFADC the search is not exhaustive. Vectors for which the distance is computed are pre-selected depending on the ï¬ rst-level quantizer q1: (5) Live = T-argmin.ec, lle â The multi-probe parameter Ï is the number of coarse-level centroids we consider. The quantizer operates a nearest- neighbor search with exact distances, in the set of reproduc- tion values. Then, the IVFADC search computes k-argmin i=0:£ s.t. ai (yi)ELIVE Livrapc = llx â a(ya)|l2- (6) Hence, IVFADC relies on the same distance estimations as the two-step quantization of ADC, but computes them only on a subset of vectors. The corresponding data structure, the inverted ï¬ le, groups the vectors yi into |C1| inverted lists I1, ..., I|C1| with homo- geneous q1(yi). Therefore, the most memory-intensive op- eration is computing LIVFADC, and boils down to linearly scanning Ï inverted lists. The quantizers. The quantizers q: and q2 have different properties. qi needs to have a relatively low number of repro- duction values so that the number of inverted lists does not explode. We typically use |Ci| ~ V@, trained via k-means. For q2, we can afford to spend more memory for a more ex- tensive representation. The ID of the vector (a 4- or 8-byte integer) is also stored in the inverted lists, so it makes no sense to have shorter codes than that; , log, |C2| > 4x 8. Product quantizer. We use a product quantizer [25] for q2, which provides a large number of reproduction values with- out increasing the processing cost. It interprets the vector y as b sub-vectors y = [y0...ybâ 1], where b is an even divisor of the dimension d. Each sub-vector is quantized with its own quantizer, yielding the tuple (q0(y0), ..., qbâ 1(ybâ 1)). | 1702.08734#10 | 1702.08734#12 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#12 | Billion-scale similarity search with GPUs | The sub-quantizers typically have 256 reproduction values, to ï¬ t in one byte. The quantization value of the product quantizer is then q2(y) = q0(y0) + 256 à q1(y1) + ... + 256bâ 1 à qbâ 1, which from a storage point of view is just the concatena- tion of the bytes produced by each sub-quantizer. Thus, the product quantizer generates b-byte codes with |C2| = 256b reproduction values. The k-means dictionaries of the quan- tizers are small and quantization is computationally cheap. | 1702.08734#11 | 1702.08734#13 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#13 | Billion-scale similarity search with GPUs | 3. GPU: OVERVIEW AND K-SELECTION This section reviews salient details of Nvidiaâ s general- purpose GPU architecture and programming model [30]. We then focus on one of the less GPU-compliant parts involved in similarity search, namely the k-selection, and discuss the literature and challenges. # 3.1 Architecture GPU lanes and warps. The Nvidia GPU is a general- purpose computer that executes instruction streams using a 32-wide vector of CUDA threads (the warp); individual threads in the warp are referred to as lanes, with a lane ID from 0 â | 1702.08734#12 | 1702.08734#14 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#14 | Billion-scale similarity search with GPUs | 31. Despite the â threadâ terminology, the best analogy to modern vectorized multicore CPUs is that each warp is a separate CPU hardware thread, as the warp shares an instruction counter. Warp lanes taking diï¬ erent execu- tion paths results in warp divergence, reducing performance. Each lane has up to 255 32-bit registers in a shared register ï¬ le. The CPU analogy is that there are up to 255 vector registers of width 32, with warp lanes as SIMD vector lanes. Collections of warps. | 1702.08734#13 | 1702.08734#15 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#15 | Billion-scale similarity search with GPUs | A user-conï¬ gurable collection of 1 to 32 warps comprises a block or a co-operative thread ar- ray (CTA). Each block has a high speed shared memory, up to 48 KiB in size. Individual CUDA threads have a block- relative ID, called a thread id, which can be used to parti- tion and assign work. Each block is run on a single core of the GPU called a streaming multiprocessor (SM). Each SM has functional units, including ALUs, memory load/store units, and various special instruction units. A GPU hides execution latencies by having many operations in ï¬ ight on warps across all SMs. Each individual warp lane instruction throughput is low and latency is high, but the aggregate arithmetic throughput of all SMs together is 5 â 10à higher than typical CPUs. Grids and kernels. Blocks are organized in a grid of blocks in a kernel. Each block is assigned a grid relative ID. The kernel is the unit of work (instruction stream with argu- ments) scheduled by the host CPU for the GPU to execute. After a block runs through to completion, new blocks can be scheduled. Blocks from diï¬ erent kernels can run concur- rently. Ordering between kernels is controllable via ordering primitives such as streams and events. | 1702.08734#14 | 1702.08734#16 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#16 | Billion-scale similarity search with GPUs | Resources and occupancy. The number of blocks execut- ing concurrently depends upon shared memory and register resources used by each block. Per-CUDA thread register us- age is determined at compilation time, while shared memory usage can be chosen at runtime. This usage aï¬ ects occu- pancy on the GPU. If a block demands all 48 KiB of shared memory for its private usage, or 128 registers per thread as 3 opposed to 32, then only 1 â 2 other blocks can run concur- rently on the same SM, resulting in low occupancy. Under high occupancy more blocks will be present across all SMs, allowing more work to be in ï¬ | 1702.08734#15 | 1702.08734#17 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#17 | Billion-scale similarity search with GPUs | ight at once. Memory types. Diï¬ erent blocks and kernels communicate through global memory, typically 4 â 32 GB in size, with 5 â 10à higher bandwidth than CPU main memory. Shared memory is analogous to CPU L1 cache in terms of speed. GPU register ï¬ le memory is the highest bandwidth memory. In order to maintain the high number of instructions in ï¬ ight on a GPU, a vast register ï¬ le is also required: 14 MB in the latest Pascal P100, in contrast with a few tens of KB on CPU. A ratio of 250 : 6.25 : 1 for register to shared to global memory aggregate cross-sectional bandwidth is typical on GPU, yielding 10 â 100s of TB/s for the register ï¬ le [10]. # 3.2 GPU register ï¬ le usage Structured register data. Shared and register memory usage involves eï¬ | 1702.08734#16 | 1702.08734#18 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#18 | Billion-scale similarity search with GPUs | ciency tradeoï¬ s; they lower occupancy but can increase overall performance by retaining a larger work- ing set in a faster memory. Making heavy use of register- resident data at the expense of occupancy or instead of shared memory is often proï¬ table [43]. As the GPU register ï¬ le is very large, storing structured data (not just temporary operands) is useful. A single lane can use its (scalar) registers to solve a local task, but with limited parallelism and storage. Instead, lanes in a GPU warp can instead exchange register data using the warp shuf- ï¬ e instruction, enabling warp-wide parallelism and storage. Lane-stride register array. A common pattern to achieve this is a lane-stride register array. That is, given elements [ai]i=o:e, each successive value is held in a register by neigh- boring lanes. The array is stored in ¢/32 registers per lane, with £a multiple of 32. Lane j stores {a;, 4324), -.-, 43245}, while register r holds {a32;, @32r41, ---; @32r+31 }- For manipulating the [ai], the register in which a; is stored (i.e., [¢/32]) and @ must be known at assembly time, while the lane (i.e., i mod 32) can be runtime knowledge. A wide variety of access patterns (shift, any-to-any) are provided; we use the butterfly permutation extensively. # 3.3 k-selection on CPU versus GPU k-selection algorithms, often for arbitrarily large £ and k, can be translated to a GPU, including radiz_ selection and bucket selection (1], probabilistic selection [33], quick- , and truncated sorts |. Their performance is dominated by multiple passes over the input in global mem- ory. Sometimes for similarity search, the input distances are computed on-the-fly or stored only in small blocks, not in their entirety. The full, explicit array might be too large to fit into any memory, and its size could be unknown at the start of the processing, rendering algorithms that require multiple passes impractical. | 1702.08734#17 | 1702.08734#19 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#19 | Billion-scale similarity search with GPUs | They suffer from other issues as well. Quickselect requires partitioning on a storage of size O(â ¬), a data-dependent memory movement. This can result in excessive memory transactions, or requiring parallel prefix sums to determine write offsets, with synchronization overhead. Radix selection has no partitioning but multiple passes are still required. Heap parallelism. In similarity search applications, one is usually interested only in a small number of results, k < 1000 or so. In this regime, selection via max-heap is a typi- cal choice on the CPU, but heaps do not expose much data parallelism (due to serial tree update) and cannot saturate SIMD execution units. The ad-heap [31] takes better advan- tage of parallelism available in heterogeneous systems, but still attempts to partition serial and parallel work between appropriate execution units. Despite the serial nature of heap update, for small k the CPU can maintain all of its state in the L1 cache with little eï¬ ort, and L1 cache latency and bandwidth remains a limiting factor. Other similarity search components, like PQ code manipulation, tend to have greater impact on CPU performance [2]. GPU heaps. Heaps can be similarly implemented on a GPU [7]. However, a straightforward GPU heap implemen- tation suï¬ ers from high warp divergence and irregular, data- dependent memory movement, since the path taken for each inserted element depends upon other values in the heap. GPU parallel priority queues [24] improve over the serial heap update by allowing multiple concurrent updates, but they require a potential number of small sorts for each insert and data-dependent memory movement. Moreover, it uses multiple synchronization barriers through kernel launches in diï¬ erent streams, plus the additional latency of successive kernel launches and coordination with the CPU host. Other more novel GPU algorithms are available for small k, namely the selection algorithm in the fgknn library [41]. This is a complex algorithm that may suï¬ er from too many synchronization points, greater kernel launch overhead, us- age of slower memories, excessive use of hierarchy, partition- ing and buï¬ ering. However, we take inspiration from this particular algorithm through the use of parallel merges as seen in their merge queue structure. # 4. FAST K-SELECTION ON THE GPU For any CPU or GPU algorithm, either memory or arith- metic throughput should be the limiting factor as per the rooï¬ | 1702.08734#18 | 1702.08734#20 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#20 | Billion-scale similarity search with GPUs | ine performance model [48]. For input from global mem- ory, k-selection cannot run faster than the time required to scan the input once at peak memory bandwidth. We aim to get as close to this limit as possible. Thus, we wish to per- form a single pass over the input data (from global memory or produced on-the-ï¬ y, perhaps fused with a kernel that is generating the data). We want to keep intermediate state in the fastest memory: the register ï¬ | 1702.08734#19 | 1702.08734#21 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#21 | Billion-scale similarity search with GPUs | le. The major disadvantage of register memory is that the indexing into the register ï¬ le must be known at assembly time, which is a strong constraint on the algorithm. # In-register sorting We use an in-register sorting primitive as a building block. Sorting networks are commonly used on SIMD architec- tures [13], as they exploit vector parallelism. They are eas- ily implemented on the GPU, and we build sorting networks with lane-stride register arrays. We use a variant of Batcherâ s bitonic sorting network sl. which is a set of parallel merges on an array of size 2". Each merge takes s arrays of length t (s and t a power of 2) to s/2 arrays of length 2¢, using log,(t) parallel steps. A bitonic sort applies this merge recursively: to sort an array of length é, merge @ arrays of length 1 to ¢/2 arrays of length 2, to £/4 arrays of length 4, successively to 1 sorted array of length @, leading to $(log,(¢)? + log,(¢)) parallel merge steps. | 1702.08734#20 | 1702.08734#22 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#22 | Billion-scale similarity search with GPUs | 4 Algorithm 1 Odd-size merging network function MERGE-ODD((Li]i=0:¢, , [Ri]i=o:ep ) parallel for i + 0: min(éz, zg) do > inverted 1st stage; inputs are already sorted COMPARE-SWAP(L¢, ~iâ 1, Ri) end for parallel do > If £p = â ¬p and a power-of-2, these are equivalent MERGE-ODD-CONTINUE(([Li]i=0:¢,, left) MERGE-ODD-CONTINUE([Ri]i=o:¢,, right) end do end function function MERGE-ODD-CONTINUE(([2i]i=0:¢, P) if â ¬>1 then he Qileg2 1-1 > largest power-of-2 < ¢ parallel for i+ 0:â hdo > Implemented with warp shuffle butterfly COMPARE-SWAP(2i, Li+h) end for parallel do if p = left then > left side recursion MERGE-ODD-CONTINUE((2;]i=0:¢â h, Left) MERGE-ODD-CONTINUE(([;]i=¢â n:¢, Fight) else > right side recursion MERGE-ODD-CONTINUE(([2i]i=0:h, Left) MERGE-ODD-CONTINUE(([2i]i=n:¢, right) end if end do end if # end if end function Odd-size merging and sorting networks. If some input data is already sorted, we can modify the network to avoid merging steps. We may also not have a full power-of-2 set of data, in which case we can eï¬ ciently shortcut to deal with the smaller size. Algorithm 1 is an odd-sized merging network that merges already sorted left and right arrays, each of arbitrary length. While the bitonic network merges bitonic sequences, we start with monotonic sequences: sequences sorted monotonically. A bitonic merge is made monotonic by reversing the ï¬ rst comparator stage. The odd size algorithm is derived by considering arrays to be padded to the next highest power-of-2 size with dummy GBT4 o[3T7]. step 1 step 2 step 3 step 4 | 1702.08734#21 | 1702.08734#23 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#23 | Billion-scale similarity search with GPUs | Figure 1: Odd-size network merging arrays of sizes 5 and 3. Bullets indicate parallel compare/swap. Dashed lines are elided elements or comparisons. input thread queue warp queue ao : [esa â â >} T)0 «+e Teak Wo Waa lane 0 i insertion : : a Tvs T z lane 1 : fs... rd TLik> 2 Wy lane i 3 ; coalesced sk: read z : i ES : ; bag [Tp TE] War We-1) lane 31 fac. Figure 2: Overview of WarpSelect. The input val- ues stream in on the left, and the warp queue on the right holds the output result. elements that are never swapped (the merge is monotonic) and are already properly positioned; any comparisons with dummy elements are elided. A left array is considered to be padded with dummy elements at the start; a right ar- ray has them at the end. A merge of two sorted arrays length £, and ép to a sorted array of ¢; + &r requires log, (max(¢z, £r))] +1 parallel steps. =0 ri parallel steps. The compare-swap is implemented using warp shuï¬ es on a lane-stride register array. Swaps with a stride a multiple of 32 occur directly within a lane as the lane holds both elements locally. Swaps of stride â ¤ 16 or a non-multiple of 32 occur with warp shuï¬ | 1702.08734#22 | 1702.08734#24 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#24 | Billion-scale similarity search with GPUs | es. In practice, used array lengths are multiples of 32 as they are held in lane-stride arrays. Algorithm 2 Odd-size sorting networ function SORT-ODD([z;i]i=0:¢) if £>1 then parallel do SORT-ODD((2iJi=0:|¢/2) ) SORT-ODD((2iJi=[¢/2]:0) end do MERGE-ODD( [ai] i=0:[¢/2); [@é]i=[e/2):0) end if end function Algorithm|2]extends the merge to a full sort. Assuming no structure present in the input data, 4(log.(¢)]? + [log.(⠬)]) parallel steps are required for sorting data of length ¢. # 4.2 WarpSelect Our k-selection implementation, WARPSELECT, maintains state entirely in registers, requires only a single pass over data and avoids cross-warp synchronization. It uses MERGE- ODD and SORT-ODD as primitives. Since the register file pro- vides much more storage than shared memory, it supports k < 1024. Each warp is dedicated to k-selection to a single one of the n arrays [aj]. If n is large enough, a single warp per each [a;] will result in full GPU occupancy. Large £ per warp is handled by recursive decomposition, if £ is known in advance. | 1702.08734#23 | 1702.08734#25 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#25 | Billion-scale similarity search with GPUs | Overview. Our approach (Algorithm B]and Figure[2) oper- ates on values, with associated indices carried along (omit- ted from the description for simplicity). It selects the k least values that come from global memory, or from intermediate value registers if fused into another kernel providing the val- ues. Let [ai]i=o:¢ be the sequence provided for selection. 5 The elements (on the left of Figure 2) are processed in groups of 32, the warp size. Lane j is responsible for pro- cessing {aj, a32+j, ...}; thus, if the elements come from global memory, the reads are contiguous and coalesced into a min- imal number of memory transactions. Data structures. Each lane j maintains a small queue of t elements in registers, called the thread queues [T j i ]i=0:t, ordered from largest to smallest (T j i+1). The choice of t is made relative to k, see Section 4.3. The thread queue is a ï¬ rst-level ï¬ lter for new values coming in. If a new a32i+j is greater than the largest key currently in the queue, T j 0 , it is guaranteed that it wonâ t be in the k smallest ï¬ | 1702.08734#24 | 1702.08734#26 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#26 | Billion-scale similarity search with GPUs | nal results. The warp shares a lane-stride register array of k smallest seen elements, [Wi]i=0:k, called the warp queue. It is ordered from smallest to largest (Wi â ¤ Wi+1); if the requested k is not a multiple of 32, we round it up. This is a second level data structure that will be used to maintain all of the k smallest warp-wide seen values. The thread and warp queues are initialized to maximum sentinel values, e.g., +â | 1702.08734#25 | 1702.08734#27 | 1702.08734 | [
"1510.00149"
]
|
Update. The three invariants maintained are:

• all per-lane T_0^j are not in the min-k;
• all per-lane T_0^j are greater than all warp queue keys W_i;
• all a_i seen so far that are in the min-k are contained in either some lane's thread queue ([T_i^j]_{i=0:t, j=0:32}), or in the warp queue.

Lane j receives a new a_{32i+j} and attempts to insert it into its thread queue. If a_{32i+j} > T_0^j, then the new pair is by definition not in the k minimum, and can be rejected. Otherwise, it is inserted into its proper sorted position in the thread queue, thus ejecting the old T_0^j. All lanes complete doing this with their new received pair and their thread queue, but it is now possible that the second invariant has been violated. Using the warp ballot instruction, we determine if any lane has violated the second invariant. If not, we are free to continue processing new elements.

Restoring the invariants. If any lane has its invariant violated, then the warp uses odd-merge to merge and sort the thread and warp queues together. The new warp queue will be the min-k elements across the merged, sorted queues, and the new thread queues will be the remainder, from min-(k + 1) to min-(k + 32t + 1).

Algorithm 3: WarpSelect pseudocode for lane j
  function WarpSelect(a)
    if a < T_0^j then
      insert a into our [T_i^j]_{i=0:t}
    end if
    if warp-ballot(T_0^j < W_{k-1}) then
      ▷ reinterpret thread queues as a lane-stride array
      [a_i]_{i=0:32t} ← cast([T_i^j]_{i=0:t, j=0:32})
      ▷ concatenate and sort thread queues
      sort-odd([a_i]_{i=0:32t})
      merge-odd([W_i]_{i=0:k}, [a_i]_{i=0:32t})
      ▷ reinterpret lane-stride array as thread queues
      [T_i^j]_{i=0:t, j=0:32} ← cast([a_i]_{i=0:32t})
      reverse-array([T_i^j]_{i=0:t})
      ▷ back in thread queue order, invariants restored
    end if
  end function
This restores the invariants and we are free to continue processing subsequent elements.

Since the thread and warp queues are already sorted, we merge the sorted warp queue of length k with 32 sorted arrays of length t. Supporting odd-sized merges is important because Batcher's formulation would require that 32t = k and is a power-of-2; thus if k = 1024, t must be 32. We found that the optimal t is much smaller (see below).

Using odd-merge to merge the 32 already sorted thread queues would require a struct-of-arrays to array-of-structs transposition in registers across the warp, since the t successive sorted values are held in different registers in the same lane rather than a lane-stride array. This is possible [12], but would use a comparable number of warp shuffles, so we just reinterpret the thread queue registers as an (unsorted) lane-stride array and sort from scratch.
Significant speedup is realizable by using odd-merge for the merge of the aggregate sorted thread queues with the warp queue.

Handling the remainder. If there are remainder elements because ℓ is not a multiple of 32, those are inserted into the thread queues for the lanes that have them, after which we proceed to the output stage.

Output. A final sort and merge is made of the thread and warp queues, after which the warp queue holds all min-k values.
# 4.3 Complexity and parameter selection

For each incoming group of 32 elements, WarpSelect can perform 1, 2 or 3 constant-time operations, all happening in warp-wide parallel time:

1. read 32 elements and compare them to all thread queue heads T_0^j, cost C_1, happens N_1 times;
2. if a new element is smaller than some T_0^j, perform an insertion sort on those specific thread queues, cost C_2 = O(t), happens N_2 times;
3. if some T_0^j < W_{k-1}, sort and merge the queues, cost C_3 = O(t log(32t)^2 + k log(max(k, 32t))), happens N_3 times.

Thus, the total cost is N_1 C_1 + N_2 C_2 + N_3 C_3. N_1 = ℓ/32, and on random data drawn independently, N_2 = O(k log(ℓ)) and N_3 = O(k log(ℓ)/t); see the Appendix for a full derivation. Hence, the trade-off is to balance a cost in N_2 C_2 and one in N_3 C_3. The practical choice for t given k and ℓ was made by experiment on a variety of k-NN data. For k <= 32 we use t = 2, k <= 128 uses t = 3, k <= 256 uses t = 4, and k <= 1024 uses t = 8, all irrespective of ℓ.
# 5. COMPUTATION LAYOUT

This section explains how IVFADC, one of the indexing methods originally built upon product quantization [25], is implemented efficiently. Details on distance computations and articulation with k-selection are the key to understanding why this method can outperform more recent GPU-compliant approximate nearest neighbor strategies [47].

# 5.1 Exact search

We briefly come back to the exhaustive search method, often referred to as exact brute-force.
It is interesting on its own for exact nearest neighbor search in small datasets. It is also a component of many indexes in the literature. In our case, we use it for the IVFADC coarse quantizer q_1.

As stated above, the distance computation boils down to a matrix multiplication. We use optimized GEMM routines in the cuBLAS library to calculate the -2<x_j, y_i> term for the L2 distance, resulting in a partial distance matrix D'. To complete the distance calculation, we use a fused k-selection kernel that adds the ||y_i||^2 term to each entry of the distance matrix and immediately submits the value to k-selection in registers. The ||x_j||^2 term need not be taken into account before k-selection. Kernel fusion thus allows for only 2 passes (GEMM write, k-select read) over D', compared to other implementations that may require 3 or more. Row-wise k-selection is likely not fusable with a well-tuned GEMM kernel, or would result in lower overall efficiency.
As D' does not fit in GPU memory for realistic problem sizes, the problem is tiled over the batch of queries, with t_q <= n_q queries being run in a single tile. Each of the ceil(n_q / t_q) tiles is an independent problem, but we run two in parallel on different streams to better occupy the GPU, so the effective memory requirement of D' is O(2 ℓ t_q). The computation can similarly be tiled over ℓ. For very large input coming from the CPU, we support buffering with pinned memory to overlap the CPU-to-GPU copy with GPU compute.

# 5.2 IVFADC indexing

PQ lookup tables. At its core, the IVFADC requires computing the distance from a vector to a set of product quantization reproduction values. By developing Equation (6) for a database vector y, we obtain:
\| x - q(y) \|_2^2 = \| x - q_1(y) - q_2(y - q_1(y)) \|_2^2.   (7)

If we decompose the residual vectors left after q_1 as:

y - q_1(y) = [\tilde{y}^1 \cdots \tilde{y}^b]   (8)

q_2(y - q_1(y)) = [q^1(\tilde{y}^1) \cdots q^b(\tilde{y}^b)]   (9)

then the distance is rewritten as:

\| x - q(y) \|_2^2 = \| \tilde{x}^1 - q^1(\tilde{y}^1) \|_2^2 + \cdots + \| \tilde{x}^b - q^b(\tilde{y}^b) \|_2^2,   (10)

where \tilde{x}^i denotes the i-th sub-vector of x - q_1(y).

Each quantizer q^1, ..., q^b has 256 reproduction values, so when x and q_1(y) are known all distances can be precomputed and stored in tables T_1, ..., T_b, each of size 256 [25]. Computing the sum (10) consists of b look-ups and additions. Comparing the cost to compute n distances:
• Explicit computation: n × d multiply-adds;
• With lookup tables: 256 × d multiply-adds and n × b lookup-adds.

This is the key to the efficiency of the product quantizer. In our GPU implementation, b is any multiple of 4 up to 64. The codes are stored as sequential groups of b bytes per vector within lists.

IVFADC lookup tables. When scanning over the elements of the inverted list I_L (where by definition q_1(y) is constant), the lookup table method can be applied, as the query x and q_1(y) are known.

Moreover, the computation of the tables T_1, ..., T_b is further optimized [5]. The expression of \| x - q(y) \|_2^2 above can be decomposed as:

\| x - q(y) \|_2^2 = \underbrace{\| q_2(\cdots) \|_2^2 + 2 \langle q_1(y), q_2(\cdots) \rangle}_{\text{term 1}} + \underbrace{\| x - q_1(y) \|_2^2}_{\text{term 2}} \underbrace{- 2 \langle x, q_2(\cdots) \rangle}_{\text{term 3}}   (11)

The objective is to minimize inner loop computations. The computations we can do in advance and store in lookup tables are as follows:
• Term 1 is independent of the query. It can be precomputed from the quantizers, and stored in a table T of size |C_1| × 256 × b;
• Term 2 is the distance to q_1's reproduction value. It is thus a by-product of the first-level quantizer q_1;
• Term 3 can be computed independently of the inverted list. Its computation costs d × 256 multiply-adds.

This decomposition is used to produce the lookup tables T_1 ... T_b used during the scan of the inverted list. For a single query, computing the τ × b tables from scratch costs τ × d × 256 multiply-adds, while this decomposition costs 256 × d multiply-adds and τ × b × 256 additions.
On the GPU, the memory usage of T can be prohibitive, so we enable the decomposition only when memory is not a concern.

# 5.3 GPU implementation

Algorithm 4 summarizes the process as one would implement it on a CPU. The inverted lists are stored as two separate arrays, for PQ codes and associated IDs. IDs are resolved only if k-selection determines k-nearest membership. This lookup yields a few sparse memory reads in a large array, thus the IDs can optionally be stored on the CPU for a tiny performance cost.

List scanning. A kernel is responsible for scanning the τ closest inverted lists for each query, and calculating the per-vector pair distances using the lookup tables T_i.
The T_i are stored in shared memory: up to n_q × τ × max_i |I_i| × b lookups are required for a query set (trillions of accesses in practice), and they are random accesses. This limits b to at most 48 (32-bit floating point) or 96 (16-bit floating point) with current architectures. In case we do not use the decomposition of Equation (11), the T_i are calculated by a separate kernel before scanning.
Multi-pass kernels. Each of the n_q × τ pairs of query against inverted list can be processed independently. At one extreme, a block is dedicated to each of these, resulting in up to n_q × τ × max_i |I_i| partial results being written back to global memory, which are then k-selected to n_q × k final results. This yields high parallelism but can exceed available GPU global memory; as with exact search, we choose a tile size t_q <= n_q to reduce memory consumption, bounding its complexity by O(2 t_q τ max_i |I_i|) with multi-streaming.

A single warp could be dedicated to k-selection of each t_q set of lists, which could result in low parallelism.
We introduce a two-pass k-selection, reducing the t_q × τ × max_i |I_i| partial results to t_q × f × k for some subdivision factor f. These are reduced again via k-selection to the final t_q × k results.

Fused kernel. As with exact search, we experimented with a kernel that dedicates a single block to scanning all τ lists for a single query, with k-selection fused with distance computation. This is possible as WarpSelect does not fight for the shared memory resource, which is severely limited. This reduces global memory write-back, since almost all intermediate results can be eliminated. However, unlike the k-selection overhead for exact computation, a significant portion of the runtime is the gather from the T_i in shared memory and the linear scanning of the I_i from global memory; the write-back is not a dominant contributor. Timing for the fused kernel is improved by at most 15%, and for some problem sizes it would be subject to lower parallelism and worse performance without subsequent decomposition. Therefore, and for reasons of implementation simplicity, we do not use this layout.

Algorithm 4: IVFPQ batch search routine
  function ivfpq-search([x_1, ..., x_{n_q}], I_1, ..., I_{|C_1|})
    for i ← 0 : n_q do
      ▷ batch quantization of Section 5.1
      L_IVF^i ← τ-argmin_{c ∈ C_1} ‖x_i − c‖_2
    end for
    for i ← 0 : n_q do
      ℒ ← []
      compute term 3 (see Section 5.2)
      for L in L_IVF^i do                         ▷ τ loops
        compute distance tables T_1, ..., T_b     ▷ distance table
        for j in I_L do
          ▷ distance estimation, Equation (10)
          d ← ‖x_i − q(y_j)‖_2^2
          append (d, L, j) to ℒ
        end for
      end for
      R_i ← k-select smallest distances d from ℒ
    end for
    return R
  end function
# 5.4 Multi-GPU parallelism

Modern servers can support several GPUs. We employ this capability for both compute power and memory.

Replication. If an index instance fits in the memory of a single GPU, it can be replicated across R different GPUs. To query n_q vectors, each replica handles a fraction n_q/R of the queries, joining the results back together on a single GPU or in CPU memory. Replication has near linear speedup, except for a potential loss in efficiency for small n_q.

Sharding. If an index instance does not fit in the memory of a single GPU, an index can be sharded across S different GPUs. For adding ℓ vectors, each shard receives ℓ/S of the vectors, and for query, each shard handles the full query set n_q, joining the partial results (an additional round of k-selection is still required) on a single GPU or in CPU memory. For a given index size ℓ, sharding will yield a speedup (sharding has a query of n_q against ℓ/S versus replication with a query of n_q/R against ℓ), but it is usually less than pure replication due to fixed overhead and the cost of the subsequent k-selection.

Replication and sharding can be used together (S shards, each with R replicas, for S × R GPUs in total). Sharding and replication are both fairly trivial, and the same principle can be used to distribute an index across multiple machines.
Figure 3: Runtimes for different k-selection methods, as a function of array length ℓ. Simultaneous arrays processed are n_q = 10000. k = 100 for full lines, k = 1000 for dashed lines. (y-axis: runtime in ms; curves: truncated bitonic sort, fgknn select, WarpSelect, memory bandwidth limit.)

# 6. EXPERIMENTS & APPLICATIONS

This section compares our GPU k-selection and nearest-neighbor approach to existing libraries. Unless stated otherwise, experiments are carried out on a 2× 2.8 GHz Intel Xeon E5-2680v2 with 4 Maxwell Titan X GPUs on CUDA 8.0.

# 6.1 k-selection performance

We compare against two other GPU small k-selection implementations: the row-based Merge Queue with Buffered Search and Hierarchical Partition extracted from the fgknn library of Tang et al. [41], and Truncated Bitonic Sort (TBiS) from Sismanis et al. [40]. Both were extracted from their respective exact search libraries.

We evaluate k-selection for k = 100 and 1000 of each row from a row-major matrix n_q × ℓ of random 32-bit floating point values on a single Titan X. The batch size n_q is fixed at 10000, and the array lengths ℓ vary from 1000 to 128000. Inputs and outputs to the problem remain resident in GPU memory, with the output being of size n_q × k, with corresponding indices. Thus, the input problem sizes range from 40 MB (ℓ = 1000) to 5.12 GB (ℓ = 128k). TBiS requires large auxiliary storage, and is limited to ℓ < 48000 in our tests.

Figure 3 shows our relative performance against TBiS and fgknn. It also includes the peak possible performance given by the memory bandwidth limit of the Titan X. The relative performance of WarpSelect over fgknn increases for larger k; even TBiS starts to outperform fgknn for larger ℓ at k = 1000. We look especially at the largest ℓ = 128000.
WarpSelect is 1.62× faster at k = 100, and 2.01× at k = 1000. Performance against peak possible drops off for all implementations at larger k. WarpSelect operates at 55% of peak at k = 100 but only 16% of peak at k = 1000. This is due to the additional overhead associated with bigger thread queues and merge/sort networks for large k.

Differences from fgknn. WarpSelect is influenced by fgknn, but has several improvements: all state is maintained in registers (no shared memory), no inter-warp synchronization or buffering is used, no "hierarchical partition", the k-selection can be fused into other kernels, and it uses odd-size networks for efficient merging and sorting.
Table 1: MNIST8m k-means performance

| method       | # GPUs | 256 centroids | 4096 centroids |
|--------------|--------|---------------|----------------|
| BIDMach [11] | 1      | 320 s         | 735 s          |
| Ours         | 1      | 140 s         | 316 s          |
| Ours         | 4      | 84 s          | 100 s          |

# 6.2 k-means clustering

The exact search method with k = 1 can be used by a k-means clustering method in the assignment stage, to assign n_q training vectors to |C_1| centroids. Despite the fact that it does not use the IVFADC and k = 1 selection is trivial (a parallel reduction is used for the k = 1 case, not WarpSelect), k-means is a good benchmark for the clustering used to train the quantizer q_1.

We apply the algorithm on MNIST8m images. The 8.1M images are graylevel digits in 28×28 pixels, linearized to 784-d vectors. We compare this k-means implementation to the GPU k-means of BIDMach [11], which was shown to be more efficient than several distributed k-means implementations that require dozens of machines³. Both algorithms were run for 20 iterations. Table 1 shows that our implementation is more than 2× faster, although both are built upon cuBLAS. Our implementation receives some benefit from the k-selection fusion into the L2 distance computation. For multi-GPU execution via replicas, the speedup is close to linear for large enough problems (3.16× for 4 GPUs with 4096 centroids). Note that this benchmark is somewhat unrealistic, as one would typically sub-sample the dataset randomly when so few centroids are requested.

Large scale. We can also compare to [3], an approximate CPU method that clusters 10^8 128-d vectors to 85k centroids. Their clustering method runs in 46 minutes, but requires 56 minutes (at least) of pre-processing to encode the vectors. Our method performs exact k-means on 4 GPUs in 52 minutes without any pre-processing.
# 6.3 Exact nearest neighbor search

We consider a classical dataset used to evaluate nearest neighbor search: SIFT1M [25]. Its characteristic sizes are ℓ = 10^6, d = 128, n_q = 10^4. Computing the partial distance matrix D' costs n_q × ℓ × d = 1.28 Tflop, which runs in less than one second on current GPUs. Figure 4 shows the cost of the distance computations against the cost of our tiling of the GEMM for the -2<x_j, y_i> term of Equation (2) and the peak possible k-selection performance on the distance matrix of size n_q × ℓ, which additionally accounts for reading the tiled result matrix D' at peak memory bandwidth.

In addition to our method from Section 5, we include times from the two GPU libraries evaluated for k-selection performance in Section 6.1. We make several observations:
• for k-selection, the naive algorithm that sorts the full result array for each query using thrust::sort_by_key is more than 10× slower than the comparison methods;
• L2 distance and k-selection cost is dominant for all but our method, which has 85% of the peak possible performance, assuming GEMM usage, and our tiling of the partial distance matrix D' on top of GEMM is close to optimal. The cuBLAS GEMM itself has low efficiency for small reduction sizes (d = 128);
• our fused L2/k-selection kernel is important. Our same exact algorithm without fusion (requiring an additional pass through D') is at least 25% slower.

³ BIDMach numbers from https://github.com/BIDData/BIDMach/wiki/Benchmarks#KMeans
Figure 4: Exact search k-NN time for the SIFT1M dataset with varying k on 1 Titan X GPU. (Curves: -2xy SGEMM (as tiled), peak possible k-select, our method, truncated bitonic sort, fgknn.)

Efficient k-selection is even more important in situations where approximate methods are used to compute distances, because the relative cost of k-selection with respect to distance computation increases.

# 6.4 Billion-scale approximate search

There are few studies on GPU-based approximate nearest-neighbor search on large datasets (ℓ ≫ 10^6). We report a few comparison points here on index search, using standard datasets and evaluation protocol in this field.

SIFT1M. For the sake of completeness, we first compare our GPU search speed on Sift1M with the implementation of Wieschollek et al. [47]. They obtain a nearest neighbor recall at 1 (fraction of queries where the true nearest neighbor is in the top 1 result) of R@1 = 0.51, and R@100 = 0.86 in 0.02 ms per query on a Titan X. For the same time budget, our implementation obtains R@1 = 0.80 and R@100 = 0.95.

SIFT1B. We compare again with Wieschollek et al., on the Sift1B dataset [26] of 1 billion SIFT image features at n_q = 10^4. We compare the search performance in terms of same memory usage for similar accuracy (more accurate methods may involve greater search time or memory usage). On a single GPU, with m = 8 bytes per vector, we obtain R@10 = 0.376 in 17.7 µs per query vector, versus their reported R@10 = 0.35 in 150 µs per query vector. Thus, our implementation is more accurate at a speed 8.5× faster.
DEEP1B. We also experimented on the Deep1B dataset of ℓ = 1 billion CNN representations for images at n_q = 10^4. The paper that introduces the dataset reports CPU results (1 thread): R@1 = 0.45 in 20 ms search time per vector. We use a PQ encoding of m = 20, with d = 80 via OPQ [17], and |C_1| = 2^18, which uses a comparable dataset storage as the original paper (20 GB). This requires multiple GPUs as it is too large for a single GPU's global memory, so we consider 4 GPUs with S = 2, R = 2. We obtain R@1 = 0.4517 in 0.0133 ms per vector. While the hardware platforms are different, it shows that making searches on GPUs is a game-changer in terms of the speed achievable on a single machine.
Figure 5: Speed/accuracy trade-off of brute-force 10-NN graph construction for the YFCC100M and DEEP1B datasets. (Two panels, one per dataset; x-axis: 10-intersection at 10; y-axis: k-NN graph build time in minutes/hours; curves for different PQ sizes m and shard/replica settings on 4 Titan X or 8 M40 GPUs.)
# 6.5 The k-NN graph

An example usage of our similarity search method is to construct a k-nearest neighbor graph of a dataset via brute force (all vectors queried against the entire index).

Experimental setup. We evaluate the trade-off between speed, precision and memory on two datasets: 95 million images from the Yfcc100M dataset [42] and Deep1B. For Yfcc100M, we compute CNN descriptors as the one-before-last layer of a ResNet [23], reduced to d = 128 with PCA.

The evaluation measures the trade-off between:

• Speed: How much time it takes to build the IVFADC index from scratch and construct the whole k-NN graph (k = 10) by searching nearest neighbors for all vectors in the dataset. Thus, this is an end-to-end test that includes indexing as well as search time;
• Quality: We sample 10,000 images for which we compute the exact nearest neighbors. Our accuracy measure is the fraction of 10 found nearest neighbors that are within the ground-truth 10 nearest neighbors.

For Yfcc100M, we use a coarse quantizer (2^16 centroids), and consider m = 16, 32 and 64 byte PQ encodings for each vector. For Deep1B, we pre-process the vectors to d = 120 via OPQ, use |C_1| = 2^18 and consider m = 20, 40. For a given encoding, we vary τ from 1 to 256, to obtain trade-offs between efficiency and quality, as seen in Figure 5.
Discussion. For Yfcc100M we used S = 1, R = 4. An accuracy of more than 0.8 is obtained in 35 minutes. For Deep1B, a lower-quality graph can be built in 6 hours, with higher quality in about half a day. We also experimented with more GPUs by doubling the replica set, using 8 Maxwell M40s (the M40 is roughly equivalent in performance to the Titan X). Performance is improved sub-linearly (~1.6× for m = 20, ~1.7× for m = 40).

For comparison, the largest k-NN graph construction we are aware of used a dataset comprising 36.5 million 384-d vectors, which took a cluster of 128 CPU servers 108.7 hours of compute [45], using NN-Descent [15]. Note that NN-Descent could also build or refine the k-NN graph for the datasets we consider, but it has a large memory overhead over the graph storage, which is already 80 GB for Deep1B. Moreover, it requires random access across all vectors (384 GB for Deep1B).

The largest GPU k-NN graph construction we found is a brute-force construction using exact search with GEMM, of a dataset of 20 million 15,000-d vectors, which took a cluster of 32 Tesla C2050 GPUs 10 days [14]. Assuming computation scales with the GEMM cost for the distance matrix, this approach for Deep1B would take an impractical 200 days of computation time on their cluster.

# 6.6 Using the k-NN graph

When a k-NN graph has been constructed for an image dataset, we can find paths in the graph between any two images, provided there is a single connected component (this is the case). For example, we can search the shortest path between two images of flowers, by propagating neighbors from a starting image to a destination image. Denoting by S and D the source and destination images, and d_{ij} the distance between nodes, we search the path P = {p_1, ..., p_n} with p_1 = S and p_n = D such that

\min_P \max_{i=1..n} d_{p_i p_{i+1}},   (12)

i.e., we want to favor smooth transitions. An example result is shown in Figure 6 from Yfcc100M⁴. It was obtained after 20 seconds of propagation in a k-NN graph with k = 15 neighbors. Since there are many flower images in the dataset, the transitions are smooth.

⁴ The mapping from vectors to images is not available for Deep1B.

Figure 6: Path in the k-NN graph of 95 million images from YFCC100M. The first and the last image are given; the algorithm computes the smoothest path between them.
# 7. CONCLUSION

The arithmetic throughput and memory bandwidth of GPUs are well into the teraflops and hundreds of gigabytes per second. However, implementing algorithms that approach these performance levels is complex and counter-intuitive. In this paper, we presented the algorithmic structure of similarity search methods that achieves near-optimal performance on GPUs.

This work enables applications that needed complex approximate algorithms before. For example, the approaches presented here make it possible to do exact k-means clustering or to compute the k-NN graph with simple brute-force approaches in less time than a CPU (or a cluster of them) would take to do this approximately.

GPU hardware is now very common on scientific workstations, due to their popularity for machine learning algorithms. We believe that our work further demonstrates their interest for database applications. Along with this work, we are publishing a carefully engineered implementation of this paper's algorithms, so that these GPUs can now also be used for efficient similarity search.

# 8. REFERENCES

[1] T. Alabi, J. D. Blanchard, B. Gordon, and R. Steinbach. Fast k-selection algorithms for graphics processing units. ACM Journal of Experimental Algorithmics, 17:4.2:4.1–4.2:4.29, October 2012.
[2] F. André, A.-M. Kermarrec, and N. L. Scouarnec.
Cache locality is not enough: High-performance nearest neighbor search with product quantization fast scan. In Proc. International Conference on Very Large DataBases, pages 288–299, 2015.
[3] Y. Avrithis, Y. Kalantidis, E. Anagnostopoulos, and I. Z. Emiris. Web-scale image clustering revisited. In Proc. International Conference on Computer Vision, pages 1502–1510, 2015.
[4] A. Babenko and V. Lempitsky.
The inverted multi-index. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3069–3076, June 2012.
[5] A. Babenko and V. Lempitsky.
Improving bilayer product quantization for billion-scale approximate nearest neighbors in high dimensions. arXiv preprint arXiv:1404.1831, 2014.
[6] A. Babenko and V. Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2055–2063, June 2016.
[7] R. Barrientos, J. Gómez, C. Tenllado, M. Prieto, and M.
Marin. knn query processing in metric spaces using GPUs. In International European Conference on Parallel and Distributed Computing, volume 6852 of Lecture Notes in Computer Science, pages 380–392, Bordeaux, France, September 2011. Springer.
[8] K. E. Batcher. Sorting networks and their applications. In Proc. Spring Joint Computer Conference, AFIPS '68 (Spring), pages 307–314, New York, NY, USA, 1968.
ACM.
[9] P. Boncz, W. Lehner, and T. Neumann. Special issue: Modern hardware. The VLDB Journal, 25(5):623–624, 2016.
[10] J. Canny, D. L. W. Hall, and D. Klein. A multi-teraflop constituency parser using GPUs. In Proc. Empirical Methods on Natural Language Processing, pages 1898–1907. ACL, 2013.
[11] J. Canny and H. Zhao. BIDMach:
Large-scale learning with zero memory allocation. In BigLearn workshop, NIPS, 2013.
[12] B. Catanzaro, A. Keller, and M. Garland. A decomposition for in-place matrix transposition. In Proc. ACM Symposium on Principles and Practice of Parallel Programming, PPoPP '14, pages 193–206, 2014.
[13] J. Chhugani, A. D. Nguyen, V. W. Lee, W. Macy, M. Hagog, Y.-K. Chen, A. Baransi, S. Kumar, and P. Dubey.
Efficient implementation of sorting on multi-core SIMD CPU architecture. Proc. VLDB Endow., 1(2):1313–1324, August 2008.
[14] A. Dashti. Efficient computation of k-nearest neighbor graphs for large high-dimensional data sets on GPU clusters. Master's thesis, University of Wisconsin Milwaukee, August 2013.
[15] W. Dong, M. Charikar, and K. Li.
Efficient k-nearest neighbor graph construction for generic similarity measures. In WWW: Proceeding of the International Conference on World Wide Web, pages 577–586, March 2011.
[16] M. Douze, H. Jégou, and F. Perronnin. Polysemous codes. In Proc. European Conference on Computer Vision, pages 785–801. Springer, October 2016.
[17] T. Ge, K. He, Q. Ke, and J. Sun.
Optimized product quantization. IEEE Trans. PAMI, 36(4):744–755, 2014.
[18] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 817–824, June 2011.
[19] Y. Gong, L. Wang, R. Guo, and S.
Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In Proc. European Conference on Computer Vision, pages 392–407, 2014.
[20] A. Gordo, J. Almazan, J. Revaud, and D. Larlus. Deep image retrieval: Learning global representations for image search. In Proc. European Conference on Computer Vision, pages 241–257, 2016.
[21] S. Han, H. Mao, and W. J. Dally.
Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[22] K. He, F. Wen, and J. Sun. K-means hashing: An affinity-preserving quantization method for learning binary compact codes. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2938–2945, June 2013.
[23] K. He, X. Zhang, S. Ren, and J. Sun.
Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, June 2016.
[24] X. He, D. Agarwal, and S. K. Prasad. Design and implementation of a parallel priority queue on many-core architectures. IEEE International Conference on High Performance Computing, pages 1–10, 2012.
[25] H. Jégou, M. Douze, and C. Schmid.
Product quantization for nearest neighbor search. IEEE Trans. PAMI, 33(1):117–128, January 2011.
[26] H. Jégou, R. Tavenard, M. Douze, and L. Amsaleg. Searching in one billion vectors: re-rank with source coding. In International Conference on Acoustics, Speech, and Signal Processing, pages 861–864, May 2011.
[27] Y. Kalantidis and Y. Avrithis.
Locally optimized product quantization for approximate nearest neighbor search. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2329–2336, June 2014.
[28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[29] F. T. Leighton.
Introduction to Parallel Algorithms and Architectures: Array, Trees, Hypercubes. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1992.
[30] E. Lindholm, J. Nickolls, S. Oberman, and J. Montrym. NVIDIA Tesla: a unified graphics and computing architecture. IEEE Micro, 28(2):39–55, March 2008.
[31] W. Liu and B.
Vinter. Ad-heap: An efficient heap data structure for asymmetric multicore processors. In Proc. of Workshop on General Purpose Processing Using GPUs, pages 54:54–54:63. ACM, 2014.
[32] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[33] L. Monroe, J. Wendelberger, and S. Michalak.
Randomized selection on the GPU. In Proc. ACM Symposium on High Performance Graphics, pages 89–98, 2011.
[34] M. Norouzi and D. Fleet. Cartesian k-means. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3017–3024, June 2013.
[35] M. Norouzi, A. Punjani, and D. J. Fleet. Fast search in Hamming space with multi-index hashing. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3108–3115, 2012.
[36] J. Pan and D. Manocha.
Fast GPU-based locality sensitive hashing for k-nearest neighbor computation. In Proc. ACM International Conference on Advances in Geographic Information Systems, pages 211–220, 2011.
[37] L. Paulevé, H. Jégou, and L. Amsaleg. Locality sensitive hashing: a comparison of hash function types and querying mechanisms. Pattern Recognition Letters, 31(11):1348–1358, August 2010.
[38] O. Shamir.
Fundamental limits of online and distributed algorithms for statistical learning and estimation. In Advances in Neural Information Processing Systems, pages 163–171, 2014.
[39] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In CVPR workshops, pages 512–519, 2014.
[40] N. Sismanis, N. Pitsianis, and X.
Sun. Parallel search of k-nearest neighbors with synchronous operations. In IEEE High Performance Extreme Computing Conference, pages 1–6, 2012.
[41] X. Tang, Z. Huang, D. M. Eyers, S. Mills, and M. Guo. Efficient selection algorithm for fast k-NN search on GPUs. In IEEE International Parallel & Distributed Processing Symposium, pages 397–406, 2015.
[42] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. YFCC100M:
The new data in multimedia research. Communications of the ACM, 59(2):64–73, January 2016.
[43] V. Volkov and J. W. Demmel. Benchmarking GPUs to tune dense linear algebra. In Proc. ACM/IEEE Conference on Supercomputing, pages 31:1–31:11, 2008.
[44] A. Wakatani and A. Murakami. GPGPU implementation of nearest neighbor search with product quantization. In IEEE International Symposium on Parallel and Distributed Processing with Applications, pages 248–253, 2014.
[45] T. Warashina, K. Aoyama, H. Sawada, and T. Hattori.
1702.08734#74 | Billion-scale similarity search with GPUs | Sun. Parallel search of k-nearest neighbors with synchronous operations. In IEEE High Performance Extreme Computing Conference, pages 1â 6, 2012. [41] X. Tang, Z. Huang, D. M. Eyers, S. Mills, and M. Guo. Eï¬ cient selection algorithm for fast k-nn search on GPUs. In IEEE International Parallel & Distributed Processing Symposium, pages 397â 406, 2015. [42] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. YFCC100M: | 1702.08734#73 | 1702.08734#75 | 1702.08734 | [
"1510.00149"
]
|
1702.08734#75 | Billion-scale similarity search with GPUs | The new data in multimedia research. Communications of the ACM, 59(2):64â 73, January 2016. [43] V. Volkov and J. W. Demmel. Benchmarking GPUs to tune dense linear algebra. In Proc. ACM/IEEE Conference on Supercomputing, pages 31:1â 31:11, 2008. [44] A. Wakatani and A. Murakami. GPGPU implementation of nearest neighbor search with product quantization. In IEEE International Symposium on Parallel and Distributed Processing with Applications, pages 248â 253, 2014. [45] T. Warashina, K. Aoyama, H. Sawada, and T. Hattori. | 1702.08734#74 | 1702.08734#76 | 1702.08734 | [
"1510.00149"
]
|