doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1702.08734 | 7 | The paper is organized as follows. Section 2 introduces the context and notation. Section 3 reviews GPU architecture and discusses problems appearing when using it for similarity search. Section 4 introduces one of our main contributions, i.e., our k-selection method for GPUs, while Section 5 provides details regarding the algorithm computation layout. Finally, Section 6 provides extensive experiments for our approach, compares it to the state of the art, and shows concrete use cases for image collections.
# 2. PROBLEM STATEMENT
We are concerned with similarity search in vector collections. Given the query vector $x \in \mathbb{R}^d$ and the collection $[y_i]_{i=0:\ell}$ ($y_i \in \mathbb{R}^d$), we search:
$L = k\text{-argmin}_{i=0:\ell} \|x - y_i\|_2, \quad (1)$
i.e., we search the k nearest neighbors of x in terms of L2 distance. The L2 distance is used most often, as it is optimized by design when learning several embeddings (e.g., [20]), due to its attractive linear algebra properties. | 1702.08734#7 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 8 |
2016] and privacy [Toubiana et al., 2010, Dwork et al., 2012, Hardt and Talwar, 2010] the research communities have formalized their criteria, and these formalizations have allowed for a blossoming of rigorous research in these fields (without the need for interpretability). However, in many cases, formal definitions remain elusive. Following the psychology literature, where Keil et al. [2004] notes "explanations may highlight an incompleteness," we argue that interpretability can assist in qualitatively ascertaining whether other desiderata (such as fairness, privacy, reliability, robustness, causality, usability and trust) are met. For example, one can provide a feasible explanation that fails to correspond to a causal structure, exposing a potential concern.
# 2 Why interpretability? Incompleteness
Not all ML systems require interpretability. Ad servers, postal code sorting, aircraft collision avoidance systems: all compute their output without human intervention. Explanation is not necessary either because (1) there are no significant consequences for unacceptable results or (2) the problem is sufficiently well-studied and validated in real applications that we trust the system's decision, even if the system is not perfect. | 1702.08608#8 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 8 | The lowest distances are collected by k-selection. For an array $[a_i]_{i=0:\ell}$, k-selection finds the k lowest valued elements $[a_{s_i}]_{i=0:k}$, $a_{s_i} \le a_{s_{i+1}}$, along with the indices $[s_i]_{i=0:k}$, $0 \le s_i < \ell$, of those elements from the input array. The $a_i$ will be 32-bit floating point values; the $s_i$ are 32- or 64-bit integers. Other comparators are sometimes desired; e.g., for cosine similarity we search for highest values. The order between equivalent keys $a_{s_i} = a_{s_j}$ is not specified.
Batching. Typically, searches are performed in batches of $n_q$ query vectors $[x_j]_{j=0:n_q}$ ($x_j \in \mathbb{R}^d$) in parallel, which allows for more flexibility when executing on multiple CPU threads or on GPU. Batching for k-selection entails selecting $n_q \times k$ elements and indices from $n_q$ separate arrays, where each array is of a potentially different length $\ell_i \ge k$.
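To make the batched k-selection semantics above concrete, here is a minimal NumPy sketch (an illustration only, not the paper's GPU implementation; `batched_k_select` is a hypothetical helper name):

```python
import numpy as np

def batched_k_select(distances, k):
    """Select the k smallest values and their indices from each row.

    distances: (n_q, l) array of float32 values, one row per query.
    Returns (values, indices), each of shape (n_q, k), sorted per row.
    """
    # argpartition places the k smallest of each row first, in O(n_q * l)...
    idx = np.argpartition(distances, k - 1, axis=1)[:, :k]
    vals = np.take_along_axis(distances, idx, axis=1)
    # ...then a small O(k log k) sort orders each row's k survivors.
    order = np.argsort(vals, axis=1)
    return (np.take_along_axis(vals, order, axis=1),
            np.take_along_axis(idx, order, axis=1))

# Example: n_q = 4 queries against l = 1000 items, k = 5.
D = np.random.rand(4, 1000).astype(np.float32)
vals, ids = batched_k_select(D, 5)
```

(For simplicity this assumes all rows share one length; the ragged case with per-array lengths $\ell_i \ge k$ would pad or loop.)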
²To avoid clutter in 0-based indexing, we use the array notation $0:\ell$ to denote the range $\{0, ..., \ell-1\}$ inclusive.
| 1702.08734#8 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 9 | So when is explanation necessary and appropriate? We argue that the need for interpretability stems from an incompleteness in the problem formalization, creating a fundamental barrier to optimization and evaluation. Note that incompleteness is distinct from uncertainty: the fused estimate of a missile location may be uncertain, but such uncertainty can be rigorously quantified and formally reasoned about. In machine learning terms, we distinguish between cases where unknowns result in quantified variance, e.g. trying to learn from a small data set or with limited sensors, and incompleteness that produces some kind of unquantified bias, e.g. including domain knowledge in a model selection process. Below are some illustrative scenarios:
• Scientific Understanding: The human's goal is to gain knowledge. We do not have a complete way of stating what knowledge is; thus the best we can do is ask for explanations we can convert into knowledge.
• Safety: For complex tasks, the end-to-end system is almost never completely testable; one cannot create a complete list of scenarios in which the system may fail. Enumerating all possible outputs given all possible inputs may be computationally or logistically infeasible, and we may be unable to flag all undesirable outputs. | 1702.08608#9 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 9 | ²To avoid clutter in 0-based indexing, we use the array notation $0:\ell$ to denote the range $\{0, ..., \ell-1\}$ inclusive.
Exact search. The exact solution computes the full pairwise distance matrix $D = [\|x_j - y_i\|_2^2]_{j=0:n_q,\, i=0:\ell} \in \mathbb{R}^{n_q \times \ell}$. In practice, we use the decomposition
$\|x_j - y_i\|_2^2 = \|x_j\|^2 + \|y_i\|^2 - 2\langle x_j, y_i \rangle. \quad (2)$
The first two terms can be precomputed in one pass over the matrices X and Y whose rows are the $[x_j]$ and $[y_i]$. The bottleneck is to evaluate $\langle x_j, y_i \rangle$, equivalent to the matrix multiplication $XY^\top$. The k-nearest neighbors for each of the $n_q$ queries are k-selected along each row of D.
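A hedged NumPy sketch of this exact-search path, combining the decomposition of Eq. (2) with row-wise k-selection (the function name and shapes are illustrative):

```python
import numpy as np

def exact_knn(X, Y, k):
    """Brute-force k-NN of each row of X among the rows of Y.

    Uses ||x - y||^2 = ||x||^2 + ||y||^2 - 2<x, y>, so the bulk of the
    work is the single matrix multiplication X @ Y.T.
    """
    x_norms = (X ** 2).sum(axis=1, keepdims=True)   # (n_q, 1), one pass over X
    y_norms = (Y ** 2).sum(axis=1)                  # (l,), one pass over Y
    D = x_norms + y_norms - 2.0 * (X @ Y.T)         # (n_q, l) squared distances
    idx = np.argpartition(D, k - 1, axis=1)[:, :k]  # k-selection along each row
    vals = np.take_along_axis(D, idx, axis=1)
    order = np.argsort(vals, axis=1)
    return np.take_along_axis(idx, order, axis=1)

neighbors = exact_knn(np.random.rand(10, 64).astype(np.float32),
                      np.random.rand(10000, 64).astype(np.float32), k=5)
```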
Compressed-domain search. From now on, we focus on approximate nearest-neighbor search. We consider, in particular, the IVFADC indexing structure [25]. The IVFADC index relies on two levels of quantization, and the database vectors are encoded. The database vector y is approximated as: | 1702.08734#9 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 10 | • Ethics: The human may want to guard against certain kinds of discrimination, and their notion of fairness may be too abstract to be completely encoded into the system (e.g., one might desire a "fair" classifier for loan approval). Even if we can encode protections for specific protected classes into the system, there might be biases that we did not consider a priori (e.g., one may not build gender-biased word embeddings on purpose, but it was a pattern in data that became apparent only after the fact).
• Mismatched objectives: The agent's algorithm may be optimizing an incomplete objective, that is, a proxy function for the ultimate goal. For example, a clinical system may be optimized for cholesterol control, without considering the likelihood of adherence; an automotive engineer may be interested in engine data not to make predictions about engine failures but to more broadly build a better car.
| 1702.08608#10 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 10 | $y \approx q(y) = q_1(y) + q_2(y - q_1(y)) \quad (3)$ where $q_1 : \mathbb{R}^d \to \mathcal{C}_1 \subset \mathbb{R}^d$ and $q_2 : \mathbb{R}^d \to \mathcal{C}_2 \subset \mathbb{R}^d$ are quantizers; i.e., functions that output an element from a finite set. Since the sets are finite, $q(y)$ is encoded as the index of $q_1(y)$ and that of $q_2(y - q_1(y))$. The first-level quantizer is a coarse quantizer and the second level fine quantizer encodes the residual vector after the first level.
The Asymmetric Distance Computation (ADC) search method returns an approximate result:
$L_{\mathrm{ADC}} = k\text{-argmin}_{i=0:\ell} \|x - q(y_i)\|_2. \quad (4)$
For IVFADC the search is not exhaustive. Vectors for which the distance is computed are pre-selected depending on the first-level quantizer $q_1$:
$L_{\mathrm{IVF}} = \tau\text{-argmin}_{c \in \mathcal{C}_1} \|x - c\|_2 \quad (5)$ | 1702.08734#10 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 11 |
• Multi-objective trade-offs: Two well-defined desiderata in ML systems may compete with each other, such as privacy and prediction quality [Hardt et al., 2016] or privacy and non-discrimination [Strahilevitz, 2008]. Even if each objective is fully-specified, the exact dynamics of the trade-off may not be fully known, and the decision may have to be case-by-case.
In the presence of an incompleteness, explanations are one of the ways to ensure that effects of gaps in problem formalization are visible to us.
# 3 How? A Taxonomy of Interpretability Evaluation
Even in standard ML settings, there exists a taxonomy of evaluation that is considered appropriate. In particular, the evaluation should match the claimed contribution. Evaluation of applied work should demonstrate success in the application: a game-playing agent might best a human player, a classifier may correctly identify star types relevant to astronomers. In contrast, core methods work should demonstrate generalizability via careful evaluation on a variety of synthetic and standard benchmarks. | 1702.08608#11 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 11 | $L_{\mathrm{IVF}} = \tau\text{-argmin}_{c \in \mathcal{C}_1} \|x - c\|_2 \quad (5)$
The multi-probe parameter $\tau$ is the number of coarse-level centroids we consider. The quantizer operates a nearest-neighbor search with exact distances, in the set of reproduction values. Then, the IVFADC search computes
$L_{\mathrm{IVFADC}} = k\text{-argmin}_{i=0:\ell \ \mathrm{s.t.}\ q_1(y_i) \in L_{\mathrm{IVF}}} \|x - q(y_i)\|_2. \quad (6)$
Hence, IVFADC relies on the same distance estimations as the two-step quantization of ADC, but computes them only on a subset of vectors.
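The control flow of Eqs. (5)-(6) can be sketched as follows; `decode` stands in for the PQ decoding machinery described below, and every name here is illustrative rather than the paper's API:

```python
import numpy as np

def ivf_search(x, centroids, inv_lists, decode, tau, k):
    """Two-level search: probe tau coarse lists, then k-select per Eq. (6).

    centroids: (|C1|, d) coarse reproduction values of q1.
    inv_lists: inv_lists[c] = list of (vector_id, code) pairs with q1(y) = c.
    decode:    maps (coarse id, code) back to an approximation q(y).
    """
    # Eq. (5): tau-argmin over coarse centroids, with exact distances.
    coarse_d = ((centroids - x) ** 2).sum(axis=1)
    probe = np.argpartition(coarse_d, tau - 1)[:tau]

    # Eq. (6): linearly scan only the tau probed inverted lists.
    cand_ids, cand_d = [], []
    for c in probe:
        for vec_id, code in inv_lists[c]:
            cand_ids.append(vec_id)
            cand_d.append(((x - decode(c, code)) ** 2).sum())
    top = np.argsort(cand_d)[:k]
    return [cand_ids[i] for i in top]
```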
The corresponding data structure, the inverted file, groups the vectors $y_i$ into $|\mathcal{C}_1|$ inverted lists $\mathcal{I}_1, ..., \mathcal{I}_{|\mathcal{C}_1|}$ with homogeneous $q_1(y_i)$. Therefore, the most memory-intensive operation is computing $L_{\mathrm{IVFADC}}$, and boils down to linearly scanning $\tau$ inverted lists. | 1702.08734#11 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 12 | In this section we lay out an analogous taxonomy of evaluation approaches for interpretability: application-grounded, human-grounded, and functionally-grounded. These range from task-relevant to general; we also acknowledge that while human evaluation is essential to assessing interpretability, human-subject evaluation is not an easy task. A human experiment needs to be well-designed to minimize confounding factors, time consumed, and other resources. We discuss the trade-offs between each type of evaluation and when each would be appropriate.
# 3.1 Application-grounded Evaluation: Real humans, real tasks | 1702.08608#12 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 12 | The quantizers. The quantizers $q_1$ and $q_2$ have different properties. $q_1$ needs to have a relatively low number of reproduction values so that the number of inverted lists does not explode. We typically use $|\mathcal{C}_1| \approx \sqrt{\ell}$, trained via k-means. For $q_2$, we can afford to spend more memory for a more extensive representation. The ID of the vector (a 4- or 8-byte integer) is also stored in the inverted lists, so it makes no sense to have shorter codes than that; i.e., $\log_2 |\mathcal{C}_2| > 4 \times 8$.
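Under the sizing rule above, a minimal sketch of training the coarse quantizer and building the inverted lists (SciPy's k-means stands in for whatever trainer one prefers; data is synthetic):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

l, d = 100_000, 64
Y = np.random.rand(l, d).astype(np.float32)

# Rule of thumb from above: about sqrt(l) coarse reproduction values.
n_lists = int(np.sqrt(l))
centroids, assignments = kmeans2(Y, n_lists, minit="points", seed=0)

# Inverted lists: vector IDs grouped by their coarse assignment q1(y).
inverted_lists = [np.where(assignments == c)[0] for c in range(n_lists)]
```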
Product quantizer. We use a product quantizer [25] for $q_2$, which provides a large number of reproduction values without increasing the processing cost. It interprets the vector y as b sub-vectors $y = [y^0 ... y^{b-1}]$, where b is an even divisor of | 1702.08734#12 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 13 | # 3.1 Application-grounded Evaluation: Real humans, real tasks
Application-grounded evaluation involves conducting human experiments within a real application. If the researcher has a concrete application in mind, such as working with doctors on diagnosing patients with a particular disease, the best way to show that the model works is to evaluate it with respect to the task: doctors performing diagnoses. This reasoning aligns with the methods of evaluation common in the human-computer interaction and visualization communities, where there exists a strong ethos around making sure that the system delivers on its intended task [Antunes et al., 2012, Lazar et al., 2010]. For example, a visualization for correcting segmentations from microscopy data would be evaluated via user studies on segmentation on the target image task [Suissa-Peleg et al., 2016]; a homework-hint system is evaluated on whether the student achieves better post-test performance [Williams et al., 2016].
Specifically, we evaluate the quality of an explanation in the context of its end-task, such as whether it results in better identification of errors, new facts, or less discrimination. Examples of experiments include:
• Domain expert experiment with the exact application task.
• Domain expert experiment with a simpler or partial task to shorten experiment time and increase the pool of potentially-willing subjects. | 1702.08608#13 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
the dimension d. Each sub-vector is quantized with its own quantizer, yielding the tuple $(q^0(y^0), ..., q^{b-1}(y^{b-1}))$. The sub-quantizers typically have 256 reproduction values, to fit in one byte. The quantization value of the product quantizer is then $q_2(y) = q^0(y^0) + 256 \times q^1(y^1) + ... + 256^{b-1} \times q^{b-1}(y^{b-1})$, which from a storage point of view is just the concatenation of the bytes produced by each sub-quantizer. Thus, the product quantizer generates b-byte codes with $|\mathcal{C}_2| = 256^b$ reproduction values. The k-means dictionaries of the quantizers are small and quantization is computationally cheap.
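A short sketch of product-quantizer encoding and decoding as just described, assuming pre-trained k-means codebooks and d divisible by b (all names are illustrative):

```python
import numpy as np

def pq_encode(y, codebooks):
    """Encode y as b bytes, one per sub-vector.

    codebooks: (b, 256, d // b) array; codebooks[m] holds the 256
    k-means reproduction values of the m-th sub-quantizer.
    """
    b, _, dsub = codebooks.shape
    code = np.empty(b, dtype=np.uint8)
    for m in range(b):
        sub = y[m * dsub:(m + 1) * dsub]
        # Nearest reproduction value of the m-th sub-quantizer.
        code[m] = np.argmin(((codebooks[m] - sub) ** 2).sum(axis=1))
    return code  # b-byte code; |C2| = 256**b values overall

def pq_decode(code, codebooks):
    """Reconstruct the approximation q2(y) from a b-byte code."""
    return np.concatenate([codebooks[m][c] for m, c in enumerate(code)])
```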
# 3. GPU: OVERVIEW AND K-SELECTION
This section reviews salient details of Nvidia's general-purpose GPU architecture and programming model [30]. We then focus on one of the less GPU-compliant parts involved in similarity search, namely the k-selection, and discuss the literature and challenges.
# 3.1 Architecture | 1702.08734#13 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 14 | • Domain expert experiment with the exact application task.
• Domain expert experiment with a simpler or partial task to shorten experiment time and increase the pool of potentially-willing subjects.
In both cases, an important baseline is how well human-produced explanations assist other humans trying to complete the task. To make high impact in real world applications, it is essential that we as a community respect the time and effort involved to do such evaluations, and also demand
high standards of experimental design when such evaluations are performed. As the HCI community recognizes [Antunes et al., 2012], this is not an easy evaluation metric. Nonetheless, it directly tests the objective that the system is built for, and thus performance with respect to that objective gives strong evidence of success.
# 3.2 Human-grounded Metrics: Real humans, simplified tasks | 1702.08608#14 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 14 | # 3.1 Architecture
GPU lanes and warps. The Nvidia GPU is a general-purpose computer that executes instruction streams using a 32-wide vector of CUDA threads (the warp); individual threads in the warp are referred to as lanes, with a lane ID from 0-31. Despite the "thread" terminology, the best analogy to modern vectorized multicore CPUs is that each warp is a separate CPU hardware thread, as the warp shares an instruction counter. Warp lanes taking different execution paths results in warp divergence, reducing performance. Each lane has up to 255 32-bit registers in a shared register file. The CPU analogy is that there are up to 255 vector registers of width 32, with warp lanes as SIMD vector lanes. | 1702.08734#14 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 15 | # 3.2 Human-grounded Metrics: Real humans, simplified tasks
Human-grounded evaluation is about conducting simpler human-subject experiments that maintain the essence of the target application. Such an evaluation is appealing when experiments with the target community are challenging. These evaluations can be completed with lay humans, allowing for both a bigger subject pool and lower expenses, since we do not have to compensate highly trained domain experts. Human-grounded evaluation is most appropriate when one wishes to test more general notions of the quality of an explanation. For example, to study what kinds of explanations are best understood under severe time constraints, one might create abstract tasks in which other factors, such as the overall task complexity, can be controlled [Kim et al., 2013, Lakkaraju et al., 2016].
The key question, of course, is how we can evaluate the quality of an explanation without a specific end-goal (such as identifying errors in a safety-oriented task or identifying relevant patterns in a science-oriented task). Ideally, our evaluation approach will depend only on the quality of the explanation, regardless of whether the explanation is the model itself or a post-hoc interpretation of a black-box model, and regardless of the correctness of the associated prediction. Examples of potential experiments include: | 1702.08608#15 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 15 | Collections of warps. A user-configurable collection of 1 to 32 warps comprises a block or a co-operative thread array (CTA). Each block has a high speed shared memory, up to 48 KiB in size. Individual CUDA threads have a block-relative ID, called a thread id, which can be used to partition and assign work. Each block is run on a single core of the GPU called a streaming multiprocessor (SM). Each SM has functional units, including ALUs, memory load/store units, and various special instruction units. A GPU hides execution latencies by having many operations in flight on warps across all SMs. Each individual warp lane instruction throughput is low and latency is high, but the aggregate arithmetic throughput of all SMs together is 5-10x higher than typical CPUs.
Grids and kernels. Blocks are organized in a grid of blocks in a kernel. Each block is assigned a grid relative ID. The kernel is the unit of work (instruction stream with arguments) scheduled by the host CPU for the GPU to execute. After a block runs through to completion, new blocks can be scheduled. Blocks from different kernels can run concurrently. Ordering between kernels is controllable via ordering primitives such as streams and events. | 1702.08734#15 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 16 | • Binary forced choice: humans are presented with pairs of explanations, and must choose the one that they find of higher quality (basic face-validity test made quantitative).
• Forward simulation/prediction: humans are presented with an explanation and an input, and must correctly simulate the model's output (regardless of the true output).
• Counterfactual simulation: humans are presented with an explanation, an input, and an output, and are asked what must be changed to change the method's prediction to a desired output (and related variants).
Here is a concrete example. The common intrusion-detection test [Chang et al., 2009] in topic models is a form of the forward simulation/prediction task: we ask the human to find the difference between the model's true output and some corrupted output as a way to determine whether the human has correctly understood what the model's true output is.
# 3.3 Functionally-grounded Evaluation: No humans, proxy tasks | 1702.08608#16 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 16 | Resources and occupancy. The number of blocks executing concurrently depends upon shared memory and register resources used by each block. Per-CUDA thread register usage is determined at compilation time, while shared memory usage can be chosen at runtime. This usage affects occupancy on the GPU. If a block demands all 48 KiB of shared memory for its private usage, or 128 registers per thread as opposed to 32, then only 1-2 other blocks can run concurrently on the same SM, resulting in low occupancy. Under high occupancy more blocks will be present across all SMs, allowing more work to be in flight at once. | 1702.08734#16 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 17 | # 3.3 Functionally-grounded Evaluation: No humans, proxy tasks
Functionally-grounded evaluation requires no human experiments; instead, it uses some formal definition of interpretability as a proxy for explanation quality. Such experiments are appealing because even general human-subject experiments require time and costs both to perform and to get necessary approvals (e.g., IRBs), which may be beyond the resources of a machine learning researcher. Functionally-grounded evaluations are most appropriate once we have a class of models or regularizers that have already been validated, e.g. via human-grounded experiments. They may also be appropriate when a method is not yet mature or when human subject experiments are unethical.
The challenge, of course, is to determine what proxies to use. For example, decision trees have been considered interpretable in many situations [Freitas, 2014]. In section 4, we describe open problems in determining what proxies are reasonable. Once a proxy has been formalized, the challenge is squarely an optimization problem, as the model class or regularizer is likely to be discrete, non-convex and often non-differentiable. Examples of experiments include
• Show the improvement of prediction performance of a model that is already proven to be interpretable (assumes that someone has run human experiments to show that the model class is interpretable). | 1702.08608#17 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 17 | Memory types. Different blocks and kernels communicate through global memory, typically 4-32 GB in size, with 5-10x higher bandwidth than CPU main memory. Shared memory is analogous to CPU L1 cache in terms of speed. GPU register file memory is the highest bandwidth memory. In order to maintain the high number of instructions in flight on a GPU, a vast register file is also required: 14 MB in the latest Pascal P100, in contrast with a few tens of KB on CPU. A ratio of 250 : 6.25 : 1 for register to shared to global memory aggregate cross-sectional bandwidth is typical on GPU, yielding 10-100s of TB/s for the register file [10].
# 3.2 GPU register file usage
Structured register data. Shared and register memory usage involves efficiency tradeoffs; they lower occupancy but can increase overall performance by retaining a larger working set in a faster memory. Making heavy use of register-resident data at the expense of occupancy or instead of shared memory is often profitable [43]. | 1702.08734#17 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 18 | • Show the improvement of prediction performance of a model that is already proven to be interpretable (assumes that someone has run human experiments to show that the model class is interpretable).
• Show that one's method performs better with respect to certain regularizers (for example, is more sparse) compared to other baselines (assumes someone has run human experiments to show that the regularizer is appropriate).
# 4 Open Problems in the Science of Interpretability, Theory and Practice
It is essential that the three types of evaluation in the previous section inform each other: the factors that capture the essential needs of real world tasks should inform what kinds of simplified tasks we perform, and the performance of our methods with respect to functional proxies should reflect their performance in real-world settings. In this section, we describe some important open problems for creating these links between the three types of evaluations:
1. What proxies are best for what real-world applications? (functionally to application-grounded)
2. What are the important factors to consider when designing simpler tasks that maintain the essence of the real end-task? (human to application-grounded)
3. What are the important factors to consider when characterizing proxies for explanation quality? (human to functionally-grounded)
Below, we describe a path to answering each of these questions.
# 4.1 Data-driven approach to discover factors of interpretability | 1702.08608#18 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 18 | As the GPU register file is very large, storing structured data (not just temporary operands) is useful. A single lane can use its (scalar) registers to solve a local task, but with limited parallelism and storage. Instead, lanes in a GPU warp can exchange register data using the warp shuffle instruction, enabling warp-wide parallelism and storage.
Lane-stride register array. A common pattern to achieve this is a lane-stride register array. That is, given elements $[a_i]_{i=0:\ell}$, each successive value is held in a register by neighboring lanes. The array is stored in $\ell/32$ registers per lane, with $\ell$ a multiple of 32. Lane $j$ stores $\{a_j, a_{32+j}, ..., a_{\ell-32+j}\}$, while register $r$ holds $\{a_{32r}, a_{32r+1}, ..., a_{32r+31}\}$. For manipulating the $[a_i]$, the register in which $a_i$ is stored (i.e., $\lfloor i/32 \rfloor$) and $\ell$ must be known at assembly time, while the lane (i.e., $i \bmod 32$) can be runtime knowledge. A wide variety of access patterns (shift, any-to-any) are provided; we use the butterfly permutation extensively.
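A NumPy model of this layout may help: rows play the role of registers and columns the 32 lanes (purely illustrative host code, not GPU code):

```python
import numpy as np

l = 128                        # array length, a multiple of 32
a = np.arange(l)

# Register r of lane j holds a[32*r + j]: rows = registers, columns = lanes.
regs = a.reshape(l // 32, 32)
assert regs[2, 5] == a[32 * 2 + 5]   # element i lives at (i // 32, i % 32)

# A butterfly permutation exchanges values between lanes whose IDs differ
# in one bit; e.g. XOR with 16 swaps the two half-warps of every register.
partner = np.arange(32) ^ 16
exchanged = regs[:, partner]
```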
# 3.3 k-selection on CPU versus GPU | 1702.08734#18 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
complex data such as images or videos, which are typically represented by
high-dimensional features and require specific indexing structures. This paper
tackles the problem of better utilizing GPUs for this task. While GPUs excel at
data-parallel tasks, prior approaches are bottlenecked by algorithms that
expose less parallelism, such as k-min selection, or make poor use of the
memory hierarchy.
We propose a design for k-selection that operates at up to 55% of theoretical
peak performance, enabling a nearest neighbor implementation that is 8.5x
faster than prior GPU state of the art. We apply it in different similarity
search scenarios, by proposing optimized design for brute-force, approximate
and compressed-domain search based on product quantization. In all these
setups, we outperform the state of the art by large margins. Our implementation
enables the construction of a high accuracy k-NN graph on 95 million images
from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion
vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced
our approach for the sake of comparison and reproducibility. | http://arxiv.org/pdf/1702.08734 | Jeff Johnson, Matthijs Douze, Hervé Jégou | cs.CV, cs.DB, cs.DS, cs.IR | null | null | cs.CV | 20170228 | 20170228 | [
{
"id": "1510.00149"
}
] |
1702.08608 | 19 | Below, we describe a path to answering each of these questions.
# 4.1 Data-driven approach to discover factors of interpretability
Imagine a matrix where rows are speciï¬c real-world tasks, columns are speciï¬c methods, and the entries are the performance of the method on the end-task. For example, one could represent how well a decision tree of depth less than 4 worked in assisting doctors in identifying pneumonia patients under age 30 in US. Once constructed, methods in machine learning could be used to identify latent dimensions that represent factors that are important to interpretability. This approach is similar to eï¬orts to characterize classiï¬cation [Ho and Basu, 2002] and clustering problems [Garg and Kalai, 2016]. For example, one might perform matrix factorization to embed both tasks and methods respectively in low-dimensional spaces (which we can then seek to interpret), as shown in Figure 2. These embeddings could help predict what methods would be most promising for a new problem, similarly to collaborative ï¬ltering.
The challenge, of course, is in creating this matrix. For example, one could imagine creating a repository of clinical cases in which the ML system has access to the patientâs record but not certain
6
methods K methods domain ~N f( domain 5 _)
Figure 2: An example of data-driven approach to discover factors in interpretability | 1702.08608#19 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
1702.08734 | 19 | # 3.3 k-selection on CPU versus GPU
k-selection algorithms, often for arbitrarily large $\ell$ and k, can be translated to a GPU, including radix selection and bucket selection [1], probabilistic selection [33], quickselect, and truncated sorts. Their performance is dominated by multiple passes over the input in global memory. Sometimes for similarity search, the input distances are computed on-the-fly or stored only in small blocks, not in their entirety. The full, explicit array might be too large to fit into any memory, and its size could be unknown at the start of the processing, rendering algorithms that require multiple passes impractical. They suffer from other issues as well. Quickselect requires partitioning on a storage of size $O(\ell)$, a data-dependent memory movement. This can result in excessive memory transactions, or requiring parallel prefix sums to determine write offsets, with synchronization overhead. Radix selection has no partitioning but multiple passes are still required. | 1702.08734#19 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
1702.08608 | 20 | current features that are only accessible to the clinician, or a repository of discrimination-in-loan cases where the ML system must provide outputs that assist a lawyer in their decision. Ideally these would be linked to domain experts who have agreed to be employed to evaluate methods when applied to their domain of expertise. Just as there are now large open repositories for problems in classification, regression, and reinforcement learning [Blake and Merz, 1998, Brockman et al., 2016, Vanschoren et al., 2014], we advocate for the creation of repositories that contain problems corresponding to real-world tasks in which human input is required. Creating such repositories will be more challenging than creating collections of standard machine learning datasets because they must include a system for human assessment, but with the availability of crowdsourcing tools these technical challenges can be surmounted.
1702.08734 | 20 | Heap parallelism. In similarity search applications, one is usually interested only in a small number of results, k ≤ 1000 or so. In this regime, selection via max-heap is a typical choice on the CPU, but heaps do not expose much data parallelism (due to serial tree update) and cannot saturate SIMD execution units. The ad-heap [31] takes better advantage of parallelism available in heterogeneous systems, but still attempts to partition serial and parallel work between appropriate execution units. Despite the serial nature of heap update, for small k the CPU can maintain all of its state in the L1 cache with little effort, and L1 cache latency and bandwidth remains a limiting factor. Other similarity search components, like PQ code manipulation, tend to have greater impact on CPU performance [2].
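For concreteness, a minimal sketch of this CPU regime (our illustration, not Faiss code), using Python's heapq as a bounded max-heap via negated keys:

```python
import heapq

def k_smallest(distances, k):
    """Select the k smallest values with a bounded max-heap.

    The heap root is the largest of the current k candidates, so most new
    values are rejected after a single comparison against the root.
    """
    heap = []  # stores negated values: heapq is a min-heap
    for d in distances:
        if len(heap) < k:
            heapq.heappush(heap, -d)
        elif d < -heap[0]:          # beats the worst candidate kept so far
            heapq.heapreplace(heap, -d)
    return sorted(-x for x in heap)

print(k_smallest([5.0, 1.0, 4.0, 2.5, 0.5, 3.0], k=3))  # [0.5, 1.0, 2.5]
```

The common-case single comparison against an L1-resident root is why the serial CPU heap is hard to beat for small k.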
GPU heaps. Heaps can be similarly implemented on a GPU [7]. However, a straightforward GPU heap implementation suffers from high warp divergence and irregular, data-dependent memory movement, since the path taken for each inserted element depends upon other values in the heap.
1702.08608 | 21 | In practice, constructing such a matrix will be expensive since each cell must be evaluated in the context of a real application, and interpreting the latent dimensions will be an iterative effort of hypothesizing why certain tasks or methods share dimensions and then checking whether our hypotheses are true. In the next two open problems, we lay out some hypotheses about what latent dimensions may correspond to; these hypotheses can be tested via much less expensive human-grounded evaluations on simulated tasks.
# 4.2 Hypothesis: task-related latent dimensions of interpretability
Disparate-seeming applications may share common categories: an application involving preventing medical error at the bedside and an application involving support for identifying inappropriate language on social media might be similar in that they involve making a decision about a specific case (a patient, a post) in a relatively short period of time. However, when it comes to time constraints, the needs in those scenarios might be different from an application involving the understanding of the main characteristics of a large omics data set, where the goal (science) is much more abstract and the scientist may have hours or days to inspect the model outputs.
1702.08734 | 21 | GPU parallel priority queues [24] improve over the serial heap update by allowing multiple concurrent updates, but they require a potential number of small sorts for each insert and data-dependent memory movement. Moreover, they use multiple synchronization barriers through kernel launches in different streams, plus the additional latency of successive kernel launches and coordination with the CPU host.
Other more novel GPU algorithms are available for small k, namely the selection algorithm in the fgknn library [41]. This is a complex algorithm that may suffer from too many synchronization points, greater kernel launch overhead, usage of slower memories, excessive use of hierarchy, partitioning and buffering. However, we take inspiration from this particular algorithm through the use of parallel merges as seen in their merge queue structure.
# 4. FAST K-SELECTION ON THE GPU
For any CPU or GPU algorithm, either memory or arithmetic throughput should be the limiting factor as per the roofline performance model [48]. For input from global memory, k-selection cannot run faster than the time required to scan the input once at peak memory bandwidth. We aim to get as close to this limit as possible. Thus, we wish to perform a single pass over the input data (from global memory or produced on-the-fly, perhaps fused with a kernel that is generating the data).
1702.08608 | 22 | Below, we list a (non-exhaustive!) set of hypotheses about what might make tasks similar in their explanation needs:
• Global vs. Local. Global interpretability implies knowing what patterns are present in general (such as key features governing galaxy formation), while local interpretability implies knowing the reasons for a specific decision (such as why a particular loan application was rejected). The former may be important when scientific understanding or bias detection is the goal; the latter when one needs a justification for a specific decision.
• Area, Severity of Incompleteness. What part of the problem formulation is incomplete, and how incomplete is it? We hypothesize that the types of explanations needed may vary depending on whether the source of concern is due to incompletely specified inputs, constraints,
1702.08734 | 22 | We want to keep intermediate state in the fastest memory: the register file. The major disadvantage of register memory is that the indexing into the register file must be known at assembly time, which is a strong constraint on the algorithm.
# 4.1 In-register sorting
We use an in-register sorting primitive as a building block. Sorting networks are commonly used on SIMD architectures [13], as they exploit vector parallelism. They are easily implemented on the GPU, and we build sorting networks with lane-stride register arrays.
We use a variant of Batcher's bitonic sorting network, which is a set of parallel merges on an array whose size is a power of 2. Each merge takes s arrays of length t (s and t a power of 2) to s/2 arrays of length 2t, using log2(t) parallel steps. A bitonic sort applies this merge recursively: to sort an array of length ℓ, merge ℓ arrays of length 1 to ℓ/2 arrays of length 2, then to ℓ/4 arrays of length 4, and so on down to 1 sorted array of length ℓ, leading to ½(log2(ℓ)² + log2(ℓ)) parallel merge steps.
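The following serial sketch (ours, not the paper's kernel code) runs the bitonic network on a power-of-2 array and counts parallel compare/swap steps, matching the ½(log2(ℓ)² + log2(ℓ)) figure above:

```python
import math
import random

def bitonic_sort(a):
    """Batcher's bitonic sort on a power-of-2-length list.

    Returns the number of parallel compare/swap steps performed; within a
    step, all swaps are independent and would run as one SIMD/GPU stage.
    """
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of 2"
    steps = 0
    size = 2
    while size <= n:              # merge stage: sorted runs of size/2 pair up
        stride = size // 2
        while stride > 0:         # each stride is one parallel step
            steps += 1
            for i in range(n):
                j = i ^ stride
                if j > i:
                    ascending = (i & size) == 0
                    if (a[i] > a[j]) == ascending:
                        a[i], a[j] = a[j], a[i]
            stride //= 2
        size *= 2
    return steps

random.seed(0)
data = [random.random() for _ in range(64)]
steps = bitonic_sort(data)
assert data == sorted(data)
log2n = int(math.log2(64))
print(steps, (log2n * log2n + log2n) // 2)   # both print 21
```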
1702.08608 | 23 | domains, internal model structure, costs, or even in the need to understand the training algorithm. The severity of the incompleteness may also affect explanation needs. For example, one can imagine a spectrum of questions about the safety of self-driving cars. On one end, one may have general curiosity about how autonomous cars make decisions. At the other, one may wish to check a specific list of scenarios (e.g., sets of sensor inputs that cause the car to drive off of the road by 10cm). In between, one might want to check a general property (safe urban driving) without an exhaustive list of scenarios and safety criteria.
• Time Constraints. How long can the user afford to spend to understand the explanation? A decision that needs to be made at the bedside or during the operation of a plant must be understood quickly, while in scientific or anti-discrimination applications, the end-user may be willing to spend hours trying to fully understand an explanation.
1702.08734 | 23 | Algorithm 1 Odd-size merging network

function MERGE-ODD([L_i]_{i=0:ℓL}, [R_i]_{i=0:ℓR})
    parallel for i ← 0 : min(ℓL, ℓR) do
        ▷ inverted 1st stage; inputs are already sorted
        COMPARE-SWAP(L_{ℓL-i-1}, R_i)
    end for
    parallel do
        ▷ if ℓL = ℓR and a power-of-2, these are equivalent
        MERGE-ODD-CONTINUE([L_i]_{i=0:ℓL}, left)
        MERGE-ODD-CONTINUE([R_i]_{i=0:ℓR}, right)
    end do
end function

function MERGE-ODD-CONTINUE([x_i]_{i=0:ℓ}, p)
    if ℓ > 1 then
        h ← 2^{⌈log2(ℓ)⌉-1}    ▷ largest power-of-2 < ℓ
        parallel for i ← 0 : ℓ-h do
            ▷ implemented with warp shuffle butterfly
            COMPARE-SWAP(x_i, x_{i+h})
        end for
        parallel do
            if p = left then    ▷ left side recursion
                MERGE-ODD-CONTINUE([x_i]_{i=0:ℓ-h}, left)
                MERGE-ODD-CONTINUE([x_i]_{i=ℓ-h:ℓ}, right)
            else    ▷ right side recursion
                MERGE-ODD-CONTINUE([x_i]_{i=0:h}, left)
                MERGE-ODD-CONTINUE([x_i]_{i=h:ℓ}, right)
            end if
        end do
    end if
end function
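A serial Python rendering of Algorithm 1 (our sketch; the compare/swaps marked parallel above run sequentially here, and the right-branch recursion, lost to extraction in the source, is filled in by symmetry, so treat it as our reading of the network):

```python
import math

def compare_swap(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def merge_odd(a, lo, l_len, r_len):
    """Merge sorted runs a[lo:lo+l_len] and a[lo+l_len:lo+l_len+r_len]."""
    # Inverted first stage: pair the tail of the left run with the head
    # of the right run (both runs are already sorted ascending).
    for i in range(min(l_len, r_len)):
        compare_swap(a, lo + l_len - 1 - i, lo + l_len + i)
    merge_odd_continue(a, lo, l_len, left=True)
    merge_odd_continue(a, lo + l_len, r_len, left=False)

def merge_odd_continue(a, lo, n, left):
    if n > 1:
        h = 1 << (math.ceil(math.log2(n)) - 1)  # largest power of 2 < n
        for i in range(n - h):                  # "parallel for" in the network
            compare_swap(a, lo + i, lo + i + h)
        if left:   # dummy padding sits at the start of a left array
            merge_odd_continue(a, lo, n - h, left=True)
            merge_odd_continue(a, lo + n - h, h, left=False)
        else:      # dummy padding sits at the end of a right array
            merge_odd_continue(a, lo, h, left=True)
            merge_odd_continue(a, lo + h, n - h, left=False)

# The sizes from Figure 1: a sorted 5-array merged with a sorted 3-array.
a = [1, 3, 5, 7, 9, 2, 4, 6]
merge_odd(a, 0, 5, 3)
assert a == sorted(a)
print(a)
```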
1702.08608 | 24 | • Nature of User Expertise. How experienced is the user in the task? The user's experience will affect what kind of cognitive chunks they have, that is, how they organize individual elements of information into collections [Neath and Surprenant, 2003]. For example, a clinician may have a notion that autism and ADHD are both developmental diseases. The nature of the user's expertise will also influence what level of sophistication they expect in their explanations. For example, domain experts may expect or prefer a somewhat larger and sophisticated model (which confirms facts they know) over a smaller, more opaque one. These preferences may be quite different from those of a hospital ethicist, who may be more narrowly concerned about whether decisions are being made in an ethical manner. More broadly, decision-makers, scientists, compliance and safety engineers, data scientists, and machine learning researchers all come with different background knowledge and communication styles.
Each of these factors can be isolated in human-grounded experiments in simulated tasks to determine which methods work best when they are present.
1702.08608 | 25 | # 4.3 Hypothesis: method-related latent dimensions of interpretability
Just as disparate applications may share common categories, disparate methods may share common qualities that correlate to their utility as explanation. As before, we provide a (non-exhaustive!) set of factors that may correspond to different explanation needs. Here, we define cognitive chunks to be the basic units of explanation.
• Form of cognitive chunks. What are the basic units of the explanation? Are they raw features? Derived features that have some semantic meaning to the expert (e.g. "neurological disorder" for a collection of diseases or "chair" for a collection of pixels)? Prototypes?
• Number of cognitive chunks. How many cognitive chunks does the explanation contain? How does the quantity interact with the type: for example, a prototype can contain a lot more information than a feature; can we handle them in similar quantities?
• Level of compositionality. Are the cognitive chunks organized in a structured way? Rules, hierarchies, and other abstractions can limit what a human needs to process at one time. For example, part of an explanation may involve defining a new unit (a chunk) that is a function of raw units, and then providing an explanation in terms of that new unit.
1702.08734 | 25 | Odd-size merging and sorting networks. If some input data is already sorted, we can modify the network to avoid merging steps. We may also not have a full power-of-2 set of data, in which case we can efficiently shortcut to deal with the smaller size.
Algorithm 1 is an odd-sized merging network that merges already sorted left and right arrays, each of arbitrary length. While the bitonic network merges bitonic sequences, we start with monotonic sequences: sequences sorted monotonically. A bitonic merge is made monotonic by reversing the first comparator stage.
The odd size algorithm is derived by considering arrays to be padded to the next highest power-of-2 size with dummy
Figure 1: Odd-size network merging arrays of sizes 5 and 3. Bullets indicate parallel compare/swap. Dashed lines are elided elements or comparisons.
1702.08608 | 26 | • Monotonicity and other interactions between cognitive chunks. Does it matter if the cognitive chunks are combined in linear or nonlinear ways? In monotone ways [Gupta et al., 2016]? Are some functions more natural to humans than others [Wilson et al., 2015, Schulz et al., 2016]?
• Uncertainty and stochasticity. How well do people understand uncertainty measures? To what extent is stochasticity understood by humans?
# 5 Conclusion: Recommendations for Researchers
In this work, we have laid the groundwork for a process to rigorously define and evaluate interpretability. There are many open questions in creating the formal links between applications, the science of human understanding, and more traditional machine learning regularizers. In the meantime, we encourage the community to consider some general principles.
1702.08734 | 26 | Figure 2: Overview of WarpSelect. The input values stream in on the left, and the warp queue on the right holds the output result.
elements that are never swapped (the merge is monotonic) and are already properly positioned; any comparisons with dummy elements are elided. A left array is considered to be padded with dummy elements at the start; a right array has them at the end. A merge of two sorted arrays of length ℓL and ℓR to a sorted array of length ℓL + ℓR requires ⌈log2(max(ℓL, ℓR))⌉ + 1 parallel steps.
The compare-swap is implemented using warp shuffles on a lane-stride register array. Swaps with a stride a multiple of 32 occur directly within a lane, as the lane holds both elements locally. Swaps of stride ≤ 16 or a non-multiple of 32 occur with warp shuffles. In practice, used array lengths are multiples of 32, as they are held in lane-stride arrays.
1702.08608 | 27 | The claim of the research should match the type of the evaluation. Just as one would be critical of a reliability-oriented paper that only cites accuracy statistics, the choice of evaluation should match the specificity of the claim being made. A contribution that is focused on a particular application should be expected to be evaluated in the context of that application (application-grounded evaluation), or on a human experiment with a closely-related task (human-grounded evaluation). A contribution that is focused on better optimizing a model class for some definition of interpretability should be expected to be evaluated with functionally-grounded metrics. As a community, we must be careful in the work on interpretability, both recognizing the need for and the costs of human-subject experiments.
In section 4, we hypothesized factors that may be the latent dimensions of interpretability. Creating a shared language around such factors is essential not only to evaluation, but also for the citation and comparison of related work. For example, work on creating a safe healthcare agent might be framed as focused on the need for explanation due to unknown inputs at the local scale, evaluated at the level of an application. In contrast, work on learning sparse linear models might also be framed as focused on the need for explanation due to unknown inputs, but this time evaluated at global scale. As we share each of our work with the community, we can do each other a service by describing factors such as
1702.08734 | 27 | Algorithm 2 Odd-size sorting network

function SORT-ODD([x_i]_{i=0:ℓ})
    if ℓ > 1 then
        parallel do
            SORT-ODD([x_i]_{i=0:⌊ℓ/2⌋})
            SORT-ODD([x_i]_{i=⌊ℓ/2⌋:ℓ})
        end do
        MERGE-ODD([x_i]_{i=0:⌊ℓ/2⌋}, [x_i]_{i=⌊ℓ/2⌋:ℓ})
    end if
end function
Algorithm 2 extends the merge to a full sort. Assuming no structure present in the input data, ½(⌈log2(ℓ)⌉² + ⌈log2(ℓ)⌉) parallel steps are required for sorting data of length ℓ.
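A matching sketch of Algorithm 2 (ours), assuming compare_swap, merge_odd, and merge_odd_continue from the Algorithm 1 sketch above are in scope:

```python
def sort_odd(a, lo=0, n=None):
    """Odd-size sorting network over a[lo:lo+n]; mergesort-shaped recursion.

    The two recursive sorts are independent ("parallel do" in the network).
    """
    if n is None:
        n = len(a)
    if n > 1:
        half = n // 2
        sort_odd(a, lo, half)
        sort_odd(a, lo + half, n - half)
        merge_odd(a, lo, half, n - half)

data = [9, 1, 7, 3, 8, 2, 5]   # odd, non-power-of-2 length
sort_odd(data)
assert data == sorted(data)
print(data)   # [1, 2, 3, 5, 7, 8, 9]
```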
# 4.2 WarpSelect
Our k-selection implementation, WARPSELECT, maintains state entirely in registers, requires only a single pass over data and avoids cross-warp synchronization. It uses MERGE-ODD and SORT-ODD as primitives. Since the register file provides much more storage than shared memory, it supports k ≤ 1024. Each warp is dedicated to k-selection to a single one of the n arrays [a_i]. If n is large enough, a single warp per each [a_i] will result in full GPU occupancy. Large ℓ per warp is handled by recursive decomposition, if ℓ is known in advance.
1702.08608 | 28 | 1. How is the problem formulation incomplete? (Section 2)
2. At what level is the evaluation being performed? (application, general user study, proxy; Section 3)
3. What are task-related relevant factors? (e.g. global vs. local, severity of incompleteness, level of user expertise, time constraints; Section 4.2)
4. What are method-related relevant factors being explored? (e.g. form of cognitive chunks, number of cognitive chunks, compositionality, monotonicity, uncertainty; Section 4.3)
and of course, adding and refining these factors as our taxonomies evolve. These considerations should move us away from vague claims about the interpretability of a particular model and toward classifying applications by a common set of terms.
Acknowledgments This piece would not have been possible without the dozens of deep conversations about interpretability with machine learning researchers and domain experts. Our friends and colleagues, we appreciate your support. We want to particularly thank Ian Goodfellow, Kush Varshney, Hanna Wallach, Solon Barocas, Stefan Rüping and Jesse Johnson for their feedback.
# References
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
1702.08734 | 28 | Overview. Our approach (Algorithm 3 and Figure 2) operates on values, with associated indices carried along (omitted from the description for simplicity). It selects the k least values that come from global memory, or from intermediate value registers if fused into another kernel providing the values. Let [a_i]_{i=0:ℓ} be the sequence provided for selection.
The elements (on the left of Figure 2) are processed in groups of 32, the warp size. Lane j is responsible for processing {a_j, a_{32+j}, ...}; thus, if the elements come from global memory, the reads are contiguous and coalesced into a minimal number of memory transactions.
1702.08608 | 29 | Pedro Antunes, Valeria Herskovic, Sergio F Ochoa, and Jose A Pino. Structuring dimensions for collaborative systems evaluation. ACM Computing Surveys, 2012.
William Bechtel and Adele Abrahamsen. Explanation: A mechanist alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 2005.
Catherine Blake and Christopher J Merz. UCI repository of machine learning databases. 1998.
Nick Bostrom and Eliezer Yudkowsky. The ethics of artificial intelligence. The Cambridge Handbook of Artificial Intelligence, 2014.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2006.
1702.08734 | 29 | Data structures. Each lane j maintains a small queue of t elements in registers, called the thread queues [T^j_i]_{i=0:t}, ordered from largest to smallest (T^j_i ≥ T^j_{i+1}). The choice of t is made relative to k, see Section 4.3. The thread queue is a first-level filter for new values coming in. If a new a_{32i+j} is greater than the largest key currently in the queue, T^j_0, it is guaranteed that it won't be in the k smallest final results. The warp shares a lane-stride register array of k smallest seen elements, [W_i]_{i=0:k}, called the warp queue. It is ordered from smallest to largest (W_i ≤ W_{i+1}); if the requested k is not a multiple of 32, we round it up. This is a second level data structure that will be used to maintain all of the k smallest warp-wide seen values. The thread and warp queues are initialized to maximum sentinel values, e.g., +∞.
Update. The three invariants maintained are:
• all per-lane T^j_0 are not in the min-k
• all per-lane T^j_0 are greater than all warp queue keys W_i
1702.08608 | 30 | Samuel Carton, Jennifer Helsby, Kenneth Joseph, Ayesha Mahmud, Youngsoo Park, Joe Walsh, Crystal Cody, CPT Estella Patterson, Lauren Haynes, and Rayid Ghani. Identifying police officers at risk of adverse events. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.
Jonathan Chang, Jordan L Boyd-Graber, Sean Gerrish, Chong Wang, and David M Blei. Reading tea leaves: How humans interpret topic models. In NIPS, 2009.
Nick Chater and Mike Oaksford. Speculations on human causal learning and reasoning. Information sampling and adaptive cognition, 2006.
Finale Doshi-Velez, Yaorong Ge, and Isaac Kohane. Comorbidity clusters in autism spectrum disorders: an electronic health record time-series analysis. Pediatrics, 133(1):e54âe63, 2014.
Finale Doshi-Velez, Byron Wallace, and Ryan Adams. Graph-sparse LDA: a topic model with structured sparsity. Association for the Advancement of Artificial Intelligence, 2015.
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Innovations in Theoretical Computer Science Conference. ACM, 2012.
1702.08734 | 30 | • all a_i seen so far in the min-k are contained in either some lane's thread queue ([T^j_i]_{i=0:t, j=0:32}), or in the warp queue.
Lane j receives a new a_{32i+j} and attempts to insert it into its thread queue. If a_{32i+j} > T^j_0, then the new pair is by definition not in the k minimum, and can be rejected.
Otherwise, it is inserted into its proper sorted position in the thread queue, thus ejecting the old T^j_0. All lanes complete doing this with their new received pair and their thread queue, but it is now possible that the second invariant has been violated. Using the warp ballot instruction, we determine if any lane has violated the second invariant. If not, we are free to continue processing new elements.
Restoring the invariants. If any lane has its invariant violated, then the warp uses odd-merge to merge and sort the thread and warp queues together. The new warp queue
1702.08608 | 31 | Alex Freitas. Comprehensible classification models: a position paper. ACM SIGKDD Explorations, 2014.
Vikas K Garg and Adam Tauman Kalai. Meta-unsupervised-learning: A supervised approach to unsupervised learning. arXiv preprint arXiv:1612.09030, 2016.
Stuart Glennan. Rethinking mechanistic explanation. Philosophy of science, 2002.
Bryce Goodman and Seth Flaxman. European Union regulations on algorithmic decision-making and a "right to explanation". arXiv preprint arXiv:1606.08813, 2016.
Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, and Alexander Van Esbroeck. Monotonic calibrated interpolated look-up tables. Journal of Machine Learning Research, 2016.
1702.08734 | 31 | Algorithm 3 WARPSELECT pseudocode for lane j

function WARPSELECT(a)
    if a < T^j_0 then
        insert a into our [T^j_i]_{i=0:t}
    end if
    if WARP-BALLOT(T^j_{t-1} < W_{k-1}) then
        ▷ Reinterpret thread queues as lane-stride array
        [a_i]_{i=0:32t} ← CAST([T^j_i]_{i=0:t, j=0:32})
        ▷ concatenate and sort thread queues
        SORT-ODD([a_i]_{i=0:32t})
        MERGE-ODD([W_i]_{i=0:k}, [a_i]_{i=0:32t})
        ▷ Reinterpret lane-stride array as thread queues
        [T^j_i]_{i=0:t, j=0:32} ← CAST([a_i]_{i=0:32t})
        REVERSE-ARRAY([T_i]_{i=0:t})
        ▷ Back in thread queue order, invariant restored
    end if
end function
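To make the control flow concrete, here is a serial simulation of the WarpSelect loop (our sketch, not the CUDA kernel; the ballot and the sort-odd/merge-odd step are modeled with ordinary Python sorts, and the parameters t and k are illustrative):

```python
import random

INF = float("inf")

def warp_select(values, k, t=3, lanes=32):
    """Serial simulation of WarpSelect: returns the k smallest values.

    thread_q[j][0] is the largest key in lane j's queue (sorted descending);
    warp_q is sorted ascending, so warp_q[k-1] is its largest key.
    """
    thread_q = [[INF] * t for _ in range(lanes)]
    warp_q = [INF] * k

    def restore_invariants():
        # Models the merge step: pool every candidate, keep the k smallest
        # in the warp queue, redistribute the rest as sorted thread queues.
        pool = sorted([x for q in thread_q for x in q] + warp_q)
        warp_q[:] = pool[:k]
        rest = pool[k:]
        for j in range(lanes):
            thread_q[j][:] = sorted(rest[j * t:(j + 1) * t], reverse=True)

    for idx, v in enumerate(values):
        q = thread_q[idx % lanes]      # lane j handles a_j, a_{32+j}, ...
        if v < q[0]:                   # otherwise v is provably not in the min-k
            q[0] = v                   # eject the old largest key
            q.sort(reverse=True)
        # Ballot: does any lane's smallest candidate beat the warp maximum?
        # (Checked after every insert here; per warp-wide step on the GPU.)
        if any(tq[t - 1] < warp_q[k - 1] for tq in thread_q):
            restore_invariants()

    restore_invariants()               # final merge of leftover candidates
    return warp_q

random.seed(0)
data = [random.random() for _ in range(10000)]
assert warp_select(data, k=10) == sorted(data)[:10]
```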
1702.08608 | 32 | Sean Hamill. CMU computer won poker battle over humans by statistically significant margin. http://www.post-gazette.com/business/tech-news/2017/01/31/CMU-computer-won-poker-battle-over-humans-by-statistically-significant-margin/stories/201701310250, 2017. Accessed: 2017-02-07.
Moritz Hardt and Kunal Talwar. On the geometry of differential privacy. In ACM Symposium on Theory of Computing. ACM, 2010.
Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, 2016.
Carl Hempel and Paul Oppenheim. Studies in the logic of explanation. Philosophy of science, 1948.
Tin Kam Ho and Mitra Basu. Complexity measures of supervised classification problems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002.
Frank Keil. Explanation and understanding. Annu. Rev. Psychol., 2006.
Frank Keil, Leonid Rozenblit, and Candice Mills. What lies beneath? understanding the limits of understanding. Thinking and seeing: Visual metacognition in adults and children, 2004.
1702.08734 | 32 | will be the min-k elements across the merged, sorted queues, and the new thread queues will be the remainder, from min-(k + 1) to min-(k + 32t + 1). This restores the invariants and we are free to continue processing subsequent elements.
Since the thread and warp queues are already sorted, we merge the sorted warp queue of length k with 32 sorted arrays of length t. Supporting odd-sized merges is important because Batcher's formulation would require that 32t = k and is a power-of-2; thus if k = 1024, t must be 32. We found that the optimal t is way smaller (see below).
Using odd-merge to merge the 32 already sorted thread queues would require a struct-of-arrays to array-of-structs transposition in registers across the warp, since the t successive sorted values are held in different registers in the same lane rather than a lane-stride array. This is possible [12], but would use a comparable number of warp shuffles, so we just reinterpret the thread queue registers as an (unsorted) lane-stride array and sort from scratch. Significant speedup is realizable by using odd-merge for the merge of the aggregate sorted thread queues with the warp queue.
Handling the remainder. If there are remainder elements because ℓ is not a multiple of 32, those are inserted into the thread queues for the lanes that have them, after which we proceed to the output stage.
Output. A final sort and merge is made of the thread and warp queues, after which the warp queue holds all min-k values.
# 4.3 Complexity and parameter selection
For each incoming group of 32 elements, WarpSelect can perform 1, 2 or 3 constant-time operations, all happening in warp-wide parallel time:
1. read 32 elements, compare to all thread queue heads T_0^j, cost C1, happens N1 times;
2. if for some lane j the new element is smaller than the head T_0^j, perform an insertion sort on those specific thread queues, cost C2 = O(t), happens N2 times;
3. if for some lane j, T_0^j < W_{k-1}, sort and merge queues, cost C3 = O(t log(32t)^2 + k log(max(k, 32t))), happens N3 times.
Thus, the total cost is N1 C1 + N2 C2 + N3 C3. N1 = ℓ/32, and on random data drawn independently, N2 = O(k log(ℓ)) and N3 = O(k log(ℓ)/t); see the Appendix for a full derivation. Hence, the trade-off is to balance a cost in N2 C2 and one in N3 C3. The practical choice for t given k and ℓ was made by experiment on a variety of k-NN data. For k ≤ 32, we use t = 2; k ≤ 128 uses t = 3; k ≤ 256 uses t = 4; and k ≤ 1024 uses t = 8, all irrespective of ℓ.
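To make the bookkeeping concrete, the following is a minimal CPU simulation of the WarpSelect queue structure in plain Python. The queue layout and names are our own illustration, not the actual kernel: the real implementation keeps all of this state in registers and performs the per-lane steps in warp-wide parallel time rather than Python loops.

```python
import random

LANES = 32
INF = float("inf")

def warp_select(values, k, t):
    """CPU sketch of WarpSelect: returns the k smallest of `values`."""
    warp_q = [INF] * k                            # min-k so far, ascending
    thread_q = [[INF] * t for _ in range(LANES)]  # descending; [0] is the head

    def merge():                                  # operation 3
        nonlocal warp_q, thread_q
        pool = sorted(warp_q + [v for q in thread_q for v in q])
        warp_q = pool[:k]                         # restore the warp-queue invariant
        rest = pool[k:]                           # remainder: ranks k+1 .. k+32t
        for lane in range(LANES):                 # becomes the new thread queues
            thread_q[lane] = sorted(rest[lane * t:(lane + 1) * t], reverse=True)

    n_full = len(values) - len(values) % LANES
    for base in range(0, n_full, LANES):          # one warp-wide step per group
        for lane in range(LANES):                 # operations 1 and 2
            v = values[base + lane]
            if v < thread_q[lane][0]:             # compare against the head
                thread_q[lane][0] = v             # evict head: provably not min-k
                thread_q[lane].sort(reverse=True)
        if any(q[0] < warp_q[k - 1] for q in thread_q):
            merge()
    for lane, v in enumerate(values[n_full:]):    # handle the remainder
        if v < thread_q[lane][0]:
            thread_q[lane][0] = v
            thread_q[lane].sort(reverse=True)
    merge()                                       # output stage
    return warp_q

random.seed(0)
data = [random.uniform(0, 1) for _ in range(10000)]
assert warp_select(data, k=100, t=3) == sorted(data)[:100]
```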
# 5. COMPUTATION LAYOUT
This section explains how IVFADC, one of the indexing methods originally built upon product quantization [25], is implemented efficiently. Details on distance computations and articulation with k-selection are the key to understanding why this method can outperform more recent GPU-compliant approximate nearest neighbor strategies [47].
# 5.1 Exact search
We briefly come back to the exhaustive search method, often referred to as exact brute-force. It is interesting on its
own for exact nearest neighbor search in small datasets. It is also a component of many indexes in the literature. In our case, we use it for the IVFADC coarse quantizer q1.
As stated in Section 2, the distance computation boils down to a matrix multiplication. We use optimized GEMM routines in the cuBLAS library to calculate the -2 <xj, yi> term for L2 distance, resulting in a partial distance matrix D'. To complete the distance calculation, we use a fused k-selection kernel that adds the ||yi||^2 term to each entry of the distance matrix and immediately submits the value to k-selection in registers. The ||xj||^2 term need not be taken into account before k-selection. Kernel fusion thus allows for only 2 passes (GEMM write, k-select read) over D', compared to other implementations that may require 3 or more. Row-wise k-selection is likely not fusable with a well-tuned GEMM kernel, or would result in lower overall efficiency.
1702.08608 | 36 | Lior Jacob Strahilevitz. Privacy versus antidiscrimination. University of Chicago Law School Working Paper, 2008.
Adi Suissa-Peleg, Daniel Haehn, Seymour Knowles-Barley, Verena Kaynig, Thouis R Jones, Alyssa Wilson, Richard Schalek, Jeï¬ery W Lichtman, and Hanspeter Pï¬ster. Automatic neural recon- struction from petavoxel of electron microscopy data. Microscopy and Microanalysis, 2016.
Vincent Toubiana, Arvind Narayanan, Dan Boneh, Helen Nissenbaum, and Solon Barocas. Adnos- tic: Privacy preserving targeted advertising. 2010.
Joaquin Vanschoren, Jan N Van Rijn, Bernd Bischl, and Luis Torgo. Openml: networked science in machine learning. ACM SIGKDD Explorations Newsletter, 15(2):49â60, 2014.
12
Kush Varshney and Homa Alemzadeh. On the safety of machine learning: Cyber-physical systems, decision sciences, and data products. CoRR, 2016.
Fulton Wang and Cynthia Rudin. Falling rule lists. In AISTATS, 2015. | 1702.08608#36 | Towards A Rigorous Science of Interpretable Machine Learning | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning. | http://arxiv.org/pdf/1702.08608 | Finale Doshi-Velez, Been Kim | stat.ML, cs.AI, cs.LG | null | null | stat.ML | 20170228 | 20170302 | [
{
"id": "1606.04155"
},
{
"id": "1606.06565"
},
{
"id": "1602.04938"
},
{
"id": "1606.01540"
},
{
"id": "1612.09030"
},
{
"id": "1606.08813"
}
] |
As D' does not fit in GPU memory for realistic problem sizes, the problem is tiled over the batch of queries, with tq ≤ nq queries being run in a single tile. Each of the ⌈nq/tq⌉ tiles are independent problems, but we run two in parallel on different streams to better occupy the GPU, so the effective memory requirement of D' is O(2 ℓ tq). The computation can similarly be tiled over ℓ. For very large input coming from the CPU, we support buffering with pinned memory to overlap CPU to GPU copy with GPU compute.
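The overall structure of the tiled exact search is easy to express with NumPy on the CPU; the sketch below uses illustrative tile sizes and helper names, with argpartition standing in for the fused k-selection kernel.

```python
import numpy as np

def exact_knn(queries, database, k, tile=1024):
    y_norms = (database ** 2).sum(axis=1)        # ||y_i||^2, computed once
    all_d, all_i = [], []
    for q0 in range(0, len(queries), tile):      # tile over the query batch
        x = queries[q0:q0 + tile]
        dist = -2.0 * x @ database.T + y_norms   # GEMM -> D', then fused norm add
        idx = np.argpartition(dist, k, axis=1)[:, :k]   # unordered min-k
        d = np.take_along_axis(dist, idx, axis=1)
        order = np.argsort(d, axis=1)            # order the k survivors
        all_i.append(np.take_along_axis(idx, order, axis=1))
        all_d.append(np.take_along_axis(d, order, axis=1))
    return np.vstack(all_d), np.vstack(all_i)    # ||x_j||^2 omitted: rank-invariant

rng = np.random.default_rng(0)
xb = rng.standard_normal((100_000, 64), dtype=np.float32)
xq = rng.standard_normal((2_000, 64), dtype=np.float32)
dist, idx = exact_knn(xq, xb, k=10)
```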
# 5.2 IVFADC indexing
PQ lookup tables. At its core, the IVFADC requires computing the distance from a vector to a set of product quantization reproduction values. By developing Equation (6) for a database vector y, we obtain:
||x - q(y)||_2^2 = ||x - q1(y) - q2(y - q1(y))||_2^2. (7)
If we decompose the residual vectors left after q1 as:
y - q1(y) = [y^1 ... y^b] and (8)
q2(y - q1(y)) = [q^1(y^1) ... q^b(y^b)], (9)
then the distance is rewritten as:
||x - q(y)||_2^2 = ||x^1 - q^1(y^1)||_2^2 + ... + ||x^b - q^b(y^b)||_2^2. (10)
Each quantizer q^1, ..., q^b has 256 reproduction values, so when x and q1(y) are known all distances can be precomputed and stored in tables T1, ..., Tb, each of size 256 [25]. Computing the sum (10) consists of b look-ups and additions. Comparing the cost to compute n distances:
• Explicit computation: n × d multiply-adds;
• With lookup tables: 256 × d multiply-adds and n × b lookup-adds.
This is the key to the efficiency of the product quantizer. In our GPU implementation, b is any multiple of 4 up to 64. The codes are stored as sequential groups of b bytes per vector within lists.
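The mechanics of the table lookup are easy to see in NumPy. The shapes below (b = 8 sub-quantizers of dimension 16, random codes rather than a trained product quantizer) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
b, dsub, n = 8, 16, 100_000                 # b sub-vectors of dimension d/b
codebooks = rng.standard_normal((b, 256, dsub), dtype=np.float32)
codes = rng.integers(0, 256, size=(n, b), dtype=np.uint8)   # b bytes / vector

x = rng.standard_normal(b * dsub, dtype=np.float32)         # query

# Tables T_1..T_b: squared distance from each query sub-vector to each of
# the 256 reproduction values -- 256 x d multiply-adds in total.
xs = x.reshape(b, dsub)
T = ((xs[:, None, :] - codebooks) ** 2).sum(axis=-1)        # shape (b, 256)

# Scanning the codes now costs n x b lookup-adds instead of n x d mul-adds.
dist = T[np.arange(b), codes].sum(axis=1)                   # shape (n,)
```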
IVFADC lookup tables. When scanning over the elements of the inverted list I_L (where by definition q1(y) is constant), the look-up table method can be applied, as the query x and q1(y) are known.
Moreover, the computation of the tables T1 ... Tb is further optimized [5]. The expression of ||x - q(y)||_2^2 in Equation (7) can be decomposed as:
||q2(...)||_2^2 + 2 <q1(y), q2(...)> + ||x - q1(y)||_2^2 - 2 <x, q2(...)>, (11)
where q2(...) abbreviates q2(y - q1(y)); the first two summands form term 1, the third is term 2, and the last is term 3.
The objective is to minimize inner loop computations. The computations we can do in advance and store in lookup tables are as follows:
• Term 1 is independent of the query. It can be precomputed from the quantizers, and stored in a table T of size |C1| × 256 × b;
• Term 2 is the distance to q1's reproduction value. It is thus a by-product of the first-level quantizer q1;
• Term 3 can be computed independently of the inverted list. Its computation costs d × 256 multiply-adds.
This decomposition is used to produce the lookup tables T1 ... Tb used during the scan of the inverted list. For a single query, computing the τ × b tables from scratch costs τ × d × 256 multiply-adds, while this decomposition costs 256 × d multiply-adds and τ × b × 256 additions. On the GPU, the memory usage of T can be prohibitive, so we enable the decomposition only when memory is not a concern.
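A sketch of this table construction under the same illustrative shapes as the previous snippet; term 1 is precomputed once for all queries, and only term 3 has to be recomputed per query:

```python
import numpy as np

rng = np.random.default_rng(0)
b, dsub, ncoarse = 8, 16, 1024
codebooks = rng.standard_normal((b, 256, dsub), dtype=np.float32)   # q2
C1 = rng.standard_normal((ncoarse, b * dsub), dtype=np.float32)     # q1 centroids

# Term 1, query-independent: a |C1| x b x 256 table built once.
sq = (codebooks ** 2).sum(-1)                       # ||q2(...)||_2^2, (b, 256)
term1 = np.stack([sq + 2.0 * np.einsum('krd,kd->kr', codebooks,
                                        c.reshape(b, dsub)) for c in C1])

def list_tables(x, list_no):
    # Term 3 costs 256 x d multiply-adds and is shared by all tau lists;
    # term 2, ||x - q1(y)||_2^2, is a scalar by-product of quantizing with q1
    # and is added once per list to the final distances.
    term3 = -2.0 * np.einsum('krd,kd->kr', codebooks, x.reshape(b, dsub))
    return term1[list_no] + term3                   # tables T_1..T_b, (b, 256)
```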
# 5.3 GPU implementation
Algorithm 4 summarizes the process as one would implement it on a CPU. The inverted lists are stored as two separate arrays, for PQ codes and associated IDs. IDs are resolved only if k-selection determines k-nearest membership. This lookup yields a few sparse memory reads in a large array, thus the IDs can optionally be stored on CPU for tiny performance cost.
List scanning. A kernel is responsible for scanning the τ closest inverted lists for each query, and calculating the per-vector pair distances using the lookup tables Ti. The Ti are stored in shared memory: up to nq × τ × max_i |Ii| × b lookups are required for a query set (trillions of accesses in practice), and are random access. This limits b to at most 48 (32-bit floating point) or 96 (16-bit floating point) with current architectures. In case we do not use the decomposition of Equation (11), the Ti are calculated by a separate kernel before scanning.
Multi-pass kernels. Each of the nq × τ pairs of query against inverted list can be processed independently. At one extreme, a block is dedicated to each of these, resulting in up to nq × τ × max_i |Ii| partial results being written back to global memory, which are then k-selected to nq × k final results. This yields high parallelism but can exceed available GPU global memory; as with exact search, we choose a tile size tq ≤ nq to reduce memory consumption, bounding its complexity by O(2 tq τ max_i |Ii|) with multi-streaming.
A single warp could be dedicated to k-selection of each tq set of lists, which could result in low parallelism. We introduce a two-pass k-selection, reducing tq × τ × max_i |Ii| to tq × f × k partial results for some subdivision factor f. This is reduced again via k-selection to the final tq × k results.
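A NumPy sketch of the two-pass reduction; dist_chunks and id_chunks are hypothetical per-subdivision partial results for one query batch, each assumed to hold more than k candidates per row:

```python
import numpy as np

def two_pass_kselect(dist_chunks, id_chunks, k):
    # Pass 1: k-select within each of the f subdivisions.
    part_d, part_i = [], []
    for d, i in zip(dist_chunks, id_chunks):
        top = np.argpartition(d, k, axis=1)[:, :k]
        part_d.append(np.take_along_axis(d, top, axis=1))
        part_i.append(np.take_along_axis(i, top, axis=1))
    # Pass 2: k-select over the f * k survivors per query.
    d, i = np.hstack(part_d), np.hstack(part_i)
    top = np.argsort(d, axis=1)[:, :k]
    return np.take_along_axis(d, top, axis=1), np.take_along_axis(i, top, axis=1)
```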
Fused kernel. As with exact search, we experimented with a kernel that dedicates a single block to scanning all τ lists
for a single query, with k-selection fused with distance computation. This is possible as WarpSelect does not fight for the shared memory resource which is severely limited. This reduces global memory write-back, since almost all intermediate results can be eliminated. However, unlike k-selection overhead for exact computation, a significant portion of the runtime is the gather from the Ti in shared memory and linear scanning of the Ii from global memory; the write-back is not a dominant contributor. Timing for the fused kernel is improved by at most 15%, and for some problem sizes would be subject to lower parallelism and worse performance without subsequent decomposition. Therefore, and for reasons of implementation simplicity, we do not use this layout.
Algorithm 4 IVFPQ batch search routine | 1702.08734#41 | Billion-scale similarity search with GPUs | Similarity search finds application in specialized database systems handling
function IVFPQ-Search([x_1, ..., x_nq], I_1, ..., I_|C1|)
    for i ← 0 : nq do                      ▷ batch quantization of Section 5.1
        L_IVF[i] ← τ-argmin_{c ∈ C1} ||x_i - c||_2
    end for
    for i ← 0 : nq do
        L ← []                             ▷ candidate (distance, list, id) triples
        Compute term 3 (see Section 5.2)
        for ℓ in L_IVF[i] do               ▷ τ loops
            Compute distance tables T_1, ..., T_b      ▷ distance table
            for j in I_ℓ do                ▷ distance estimation, Equation (10)
                d ← ||x_i - q(y_j)||_2^2
                Append (d, ℓ, j) to L
            end for
        end for
        R_i ← k-select smallest distances d from L
    end for
    return R
end function
# 5.4 Multi-GPU parallelism
Modern servers can support several GPUs. We employ this capability for both compute power and memory.
Replication. If an index instance fits in the memory of a single GPU, it can be replicated across R different GPUs. To query nq vectors, each replica handles a fraction nq/R of the queries, joining the results back together on a single GPU or in CPU memory. Replication has near linear speedup, except for a potential loss in efficiency for small nq.
Sharding. If an index instance does not fit in the memory of a single GPU, an index can be sharded across S different GPUs. For adding ℓ vectors, each shard receives ℓ/S of the vectors, and for query, each shard handles the full query set nq, joining the partial results (an additional round of k-selection is still required) on a single GPU or in CPU memory. For a given index size ℓ, sharding will yield a speedup (sharding has a query of nq against ℓ/S versus replication with a query of nq/R against ℓ), but is usually less than pure replication due to fixed overhead and cost of subsequent k-selection.
Replication and sharding can be used together (S shards, each with R replicas for S × R GPUs in total). Sharding or replication are both fairly trivial, and the same principle can be used to distribute an index across multiple machines.
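Both patterns amount to simple dispatch-and-merge logic. The sketch below uses a toy FlatIndex as a stand-in for one GPU's index; only the replication/sharding dispatch itself reflects the text.

```python
import numpy as np

class FlatIndex:
    """Toy stand-in for one GPU's index over a slice of the database."""
    def __init__(self, vectors, id_offset=0):
        self.v, self.off = vectors, id_offset
    def search(self, q, k):
        d = ((q[:, None, :] - self.v[None, :, :]) ** 2).sum(-1)
        i = np.argsort(d, axis=1)[:, :k]
        return np.take_along_axis(d, i, axis=1), i + self.off   # global ids

def search_replicated(replicas, queries, k):
    # Each of the R replicas handles a fraction n_q / R of the queries.
    parts = np.array_split(np.arange(len(queries)), len(replicas))
    out = [r.search(queries[p], k) for r, p in zip(replicas, parts)]
    return np.vstack([d for d, _ in out]), np.vstack([i for _, i in out])

def search_sharded(shards, queries, k):
    # Each of the S shards sees the full query set on 1/S of the vectors;
    # the partial results need one extra round of k-selection.
    dists, ids = zip(*(s.search(queries, k) for s in shards))
    d, i = np.hstack(dists), np.hstack(ids)          # n_q x (S * k)
    top = np.argsort(d, axis=1)[:, :k]
    return np.take_along_axis(d, top, axis=1), np.take_along_axis(i, top, axis=1)
```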
[Figure: log-log plot of k-selection runtime (ms) versus array length, comparing truncated bitonic sort, fgknn select, WarpSelect, and the memory bandwidth limit.]
Figure 3: Runtimes for different k-selection methods, as a function of array length ℓ. Simultaneous arrays processed are nq = 10000. k = 100 for full lines, k = 1000 for dashed lines.
# 6. EXPERIMENTS & APPLICATIONS
This section compares our GPU k-selection and nearest-neighbor approach to existing libraries. Unless stated otherwise, experiments are carried out on a 2×2.8 GHz Intel Xeon E5-2680v2 with 4 Maxwell Titan X GPUs on CUDA 8.0.
# 6.1 k-selection performance
We compare against two other GPU small k-selection implementations: the row-based Merge Queue with Buffered Search and Hierarchical Partition extracted from the fgknn library of Tang et al. [41], and Truncated Bitonic Sort (TBiS) from Sismanis et al. [40]. Both were extracted from their respective exact search libraries.
We evaluate k-selection for k = 100 and 1000 of each row from a row-major matrix nq × ℓ of random 32-bit floating point values on a single Titan X. The batch size nq is fixed at 10000, and the array lengths ℓ vary from 1000 to 128000. Inputs and outputs to the problem remain resident in GPU memory, with the output being of size nq × k, with corresponding indices. Thus, the input problem sizes range from 40 MB (ℓ = 1000) to 5.12 GB (ℓ = 128k). TBiS requires large auxiliary storage, and is limited to ℓ ≤ 48000 in our tests.
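For readers who want a quick baseline on their own hardware, the same protocol is straightforward to time with PyTorch's built-in k-selection; this is not the benchmark code used here, and absolute numbers depend on the GPU and library version.

```python
import time
import torch

nq, k = 10_000, 100
torch.topk(torch.rand(32, 1024, device="cuda"), 10)   # warm-up
for ell in (1_000, 4_000, 16_000, 64_000, 128_000):
    x = torch.rand(nq, ell, device="cuda")            # 5.12 GB at ell = 128k
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    torch.topk(x, k, dim=1, largest=False)            # row-wise min-k
    torch.cuda.synchronize()
    print(f"l = {ell:>7}: {(time.perf_counter() - t0) * 1e3:.2f} ms")
```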
Figure 3 shows our relative performance against TBiS and fgknn. It also includes the peak possible performance given by the memory bandwidth limit of the Titan X. The relative performance of WarpSelect over fgknn increases for larger k; even TBiS starts to outperform fgknn for larger ℓ at k = 1000. We look especially at the largest ℓ = 128000. WarpSelect is 1.62× faster at k = 100, 2.01× at k = 1000. Performance against peak possible drops off for all implementations at larger k. WarpSelect operates at 55% of peak at k = 100 but only 16% of peak at k = 1000. This is due to additional overhead associated with bigger thread queues and merge/sort networks for large k.
Differences from fgknn. WarpSelect is influenced by fgknn, but has several improvements: all state is maintained in registers (no shared memory), no inter-warp synchronization or buffering is used, no "hierarchical partition", the k-selection can be fused into other kernels, and it uses odd-size networks for efficient merging and sorting.
method         # GPUs   256 centroids   4096 centroids
BIDMach [11]   1        320 s           735 s
Ours           1        140 s           316 s
Ours           4        84 s            100 s
Table 1: MNIST8m k-means performance
# 6.2 k-means clustering
The exact search method with k = 1 can be used by a k-means clustering method in the assignment stage, to assign nq training vectors to |C1| centroids. Despite the fact that it does not use the IVFADC and k = 1 selection is trivial (a parallel reduction is used for the k = 1 case, not WarpSelect), k-means is a good benchmark for the clustering used to train the quantizer q1.
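Structurally this is ordinary Lloyd k-means with a brute-force assignment step; a minimal NumPy sketch (synthetic data, no GPU) is below. As in the exact search kernel, the ||x_i||^2 term can be dropped from the assignment since it does not change the argmin.

```python
import numpy as np

def kmeans(x, ncentroids, niter=20, seed=0):
    rng = np.random.default_rng(seed)
    c = x[rng.choice(len(x), ncentroids, replace=False)]
    for _ in range(niter):
        # Assignment: k = 1 nearest-centroid search (one GEMM + argmin).
        assign = ((c ** 2).sum(1) - 2.0 * x @ c.T).argmin(axis=1)
        for j in range(ncentroids):        # update step
            pts = x[assign == j]
            if len(pts):
                c[j] = pts.mean(axis=0)
    return c

rng = np.random.default_rng(0)
x = rng.standard_normal((8_192, 784)).astype(np.float32)
centroids = kmeans(x, ncentroids=256)
```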
We apply the algorithm on MNIST8m images. The 8.1M images are graylevel digits in 28×28 pixels, linearized to vectors of 784-d. We compare this k-means implementation to the GPU k-means of BIDMach [11], which was shown to be more efficient than several distributed k-means implementations that require dozens of machines (BIDMach numbers are from https://github.com/BIDData/BIDMach/wiki/Benchmarks#KMeans). Both algorithms were run for 20 iterations. Table 1 shows that our implementation is more than 2× faster, although both are built upon cuBLAS. Our implementation receives some benefit from the k-selection fusion into L2 distance computation. For multi-GPU execution via replicas, the speedup is close to linear for large enough problems (3.16× for 4 GPUs with 4096 centroids). Note that this benchmark is somewhat unrealistic, as one would typically sub-sample the dataset randomly when so few centroids are requested.
Large scale. We can also compare to [3], an approximate CPU method that clusters 10^8 128-d vectors to 85k centroids. Their clustering method runs in 46 minutes, but requires 56 minutes (at least) of pre-processing to encode the vectors. Our method performs exact k-means on 4 GPUs in 52 minutes without any pre-processing.
# 6.3 Exact nearest neighbor search
We consider a classical dataset used to evaluate nearest neighbor search: Sift1M [25]. Its characteristic sizes are ℓ = 10^6, d = 128, nq = 10^4. Computing the partial distance matrix D' costs nq × ℓ × d = 1.28 Tflop, which runs in less than one second on current GPUs. Figure 4 shows the cost of the distance computations against the cost of our tiling of the GEMM for the -2 <xj, yi> term of Equation (2) and the peak possible k-selection performance on the distance matrix of size nq × ℓ, which additionally accounts for reading the tiled result matrix D' at peak memory bandwidth.
In addition to our method from Section 5, we include times from the two GPU libraries evaluated for k-selection performance in Section 6.1. We make several observations:
• for k-selection, the naive algorithm that sorts the full result array for each query using thrust::sort_by_key is more than 10× slower than the comparison methods;
• L2 distance and k-selection cost is dominant for all but our method, which has 85% of the peak possible performance, assuming GEMM usage and our tiling of the partial distance matrix D' on top of GEMM is close to optimal. The cuBLAS GEMM itself has low efficiency for small reduction sizes (d = 128);
[Figure: plot of search time (ms) versus k, comparing the -2xy SGEMM (as tiled), peak possible k-select, our method, truncated bitonic sort, and fgknn.]
Figure 4: Exact search k-NN time for the SIFT1M dataset with varying k on 1 Titan X GPU.
• Our fused L2/k-selection kernel is important. Our same exact algorithm without fusion (requiring an additional pass through D') is at least 25% slower.
Efficient k-selection is even more important in situations where approximate methods are used to compute distances, because the relative cost of k-selection with respect to distance computation increases.
# 6.4 Billion-scale approximate search
There are few studies on GPU-based approximate nearest-neighbor search on large datasets (ℓ ≫ 10^6). We report a few comparison points here on index search, using standard datasets and evaluation protocol in this field.
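For orientation, an IVFADC index of the kind benchmarked below can be built with the library open-sourced alongside this work; the snippet uses its current Python interface, which may have evolved since publication, and toy sizes rather than billion-scale data.

```python
import faiss
import numpy as np

d, nlist, m = 128, 1024, 8                    # m bytes of PQ code per vector
rng = np.random.default_rng(0)
xb = rng.standard_normal((200_000, d)).astype(np.float32)
xq = rng.standard_normal((100, d)).astype(np.float32)

quantizer = faiss.IndexFlatL2(d)              # coarse quantizer q1
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, 8)   # 8 bits per sub-code
index.train(xb[:100_000])                     # train q1 and the PQ
index.add(xb)
index.nprobe = 64                             # tau, number of lists visited
D, I = index.search(xq, 10)                   # 10 nearest neighbors
```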
SIFT1M. For the sake of completeness, we first compare our GPU search speed on Sift1M with the implementation of Wieschollek et al. [47]. They obtain a nearest neighbor recall at 1 (fraction of queries where the true nearest neighbor is in the top 1 result) of R@1 = 0.51, and R@100 = 0.86 in 0.02 ms per query on a Titan X. For the same time budget, our implementation obtains R@1 = 0.80 and R@100 = 0.95.
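The recall-at-R measure used throughout this section is simple to state in code; I is the nq × k result matrix and gt holds each query's true nearest neighbor id:

```python
import numpy as np

def recall_at_r(I, gt, r):
    """Fraction of queries whose true neighbor appears in the top r results."""
    return float((I[:, :r] == gt[:, None]).any(axis=1).mean())
```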
SIFT1B. We compare again with Wieschollek et al., on the Sift1B dataset [26] of 1 billion SIFT image features at nq = 10^4. We compare the search performance in terms of same memory usage for similar accuracy (more accurate methods may involve greater search time or memory usage). On a single GPU, with m = 8 bytes per vector, R@10 = 0.376 in 17.7 µs per query vector, versus their reported R@10 = 0.35 in 150 µs per query vector. Thus, our implementation is more accurate at a speed 8.5× faster.
DEEP1B. We also experimented on the Deep1B dataset of ℓ = 10^9 CNN representations for images at nq = 10^4. The paper that introduces the dataset reports CPU results (1 thread): R@1 = 0.45 in 20 ms search time per vector. We use a PQ encoding of m = 20, with d = 80 via OPQ [17], and |C1| = 2^18, which uses a comparable dataset storage as the original paper (20 GB). This requires multiple GPUs as it is too large for a single GPU's global memory, so we consider 4 GPUs with S = 2, R = 2. We obtain a R@1 = 0.4517 in 0.0133 ms per vector. While the hardware platforms are different, it shows that making searches on GPUs is a game-changer in terms of speed achievable on a single machine.
[Figure 5 plot residue: two panels of k-NN graph build time (minutes for YFCC100M, hours for DEEP1B) versus 10-intersection at 10; legends list 4 Titan X configurations (m = 64/32/16, S = 1, R = 4 for YFCC100M; m = 40, S = 4 and m = 20, S = 2 for DEEP1B) and 8 M40 configurations (m = 40, S = 4 and m = 20, S = 2).]
Figure 5: Speed/accuracy trade-off of brute-force 10-NN graph construction for the YFCC100M and DEEP1B datasets.
# 6.5 The k-NN graph
An example usage of our similarity search method is to construct a k-nearest neighbor graph of a dataset via brute force (all vectors queried against the entire index).
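Such a brute-force k-NN graph can be expressed in a few lines with the open-sourced library's Python interface. The following is a minimal sketch (assuming the faiss package and synthetic data), not the exact pipeline used in the experiments:

```python
import numpy as np
import faiss  # the library open-sourced with this paper

d, n, k = 128, 100000, 10
xb = np.random.rand(n, d).astype('float32')   # synthetic descriptors

index = faiss.IndexFlatL2(d)                  # exact brute-force index
# with a GPU build, the index can be moved to the GPUs first, e.g.
# index = faiss.index_cpu_to_all_gpus(index)  (assumes the faiss-gpu build)
index.add(xb)

# query the index with its own database vectors; ask for k+1 neighbors
# because each vector returns itself at distance 0
D, I = index.search(xb, k + 1)
knn_graph = I[:, 1:]                          # drop the self-match
print(knn_graph.shape)                        # (n, k)
```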
Experimental setup. We evaluate the trade-off between speed, precision and memory on two datasets: 95 million images from the Yfcc100M dataset [42] and Deep1B. For Yfcc100M, we compute CNN descriptors as the one-before-last layer of a ResNet [23], reduced to d = 128 with PCA.
The evaluation measures the trade-off between:
• Speed: How much time it takes to build the IVFADC index from scratch and construct the whole k-NN graph (k = 10) by searching nearest neighbors for all vectors in the dataset. Thus, this is an end-to-end test that includes indexing as well as search time;
• Quality: We sample 10,000 images for which we compute the exact nearest neighbors. Our accuracy measure is the fraction of 10 found nearest neighbors that are within the ground-truth 10 nearest neighbors (a sketch of this measure is given below).

Figure 6: Path in the k-NN graph of 95 million images from YFCC100M. The first and the last image are given; the algorithm computes the smoothest path between them.
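The Quality measure above (10-intersection at 10) can be computed as follows; this is a minimal sketch with hypothetical result arrays I and gt_I:

```python
import numpy as np

def intersection_at_k(I, gt_I, k=10):
    """Mean size of the intersection between the returned top-k and the
    ground-truth top-k neighbor lists, normalized by k."""
    scores = [len(set(I[q, :k]) & set(gt_I[q, :k])) / k
              for q in range(I.shape[0])]
    return float(np.mean(scores))

# toy example: one query where 7 of the 10 found neighbors are correct
I = np.array([list(range(7)) + [90, 91, 92]])
gt_I = np.array([list(range(10))])
print(intersection_at_k(I, gt_I))  # 0.7
```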
Discussion. For Yfcc100M we used S = 1, R = 4. An accuracy of more than 0.8 is obtained in 35 minutes. For Deep1B, a lower-quality graph can be built in 6 hours, with higher quality in about half a day. We also experimented with more GPUs by doubling the replica set, using 8 Maxwell M40s (the M40 is roughly equivalent in performance to the Titan X). Performance is improved sub-linearly (~1.6× for m = 20, ~1.7× for m = 40).
For comparison, the largest k-NN graph construction we are aware of used a dataset comprising 36.5 million 384-d vectors, which took a cluster of 128 CPU servers 108.7 hours of compute [45], using NN-Descent [15]. Note that NN-Descent could also build or refine the k-NN graph for the datasets we consider, but it has a large memory overhead over the graph storage, which is already 80 GB for Deep1B. Moreover it requires random access across all vectors (384 GB for Deep1B).
The largest GPU k-NN graph construction we found is a brute-force construction using exact search with GEMM, of a dataset of 20 million 15,000-d vectors, which took a cluster of 32 Tesla C2050 GPUs 10 days [14]. Assuming computation scales with GEMM cost for the distance matrix, this approach for Deep1B would take an impractical 200 days of computation time on their cluster.
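The GEMM-based brute-force construction mentioned above rests on the decomposition ||x − y||² = ||x||² + ||y||² − 2⟨x, y⟩, so the dominant cost is a single matrix multiplication. A minimal NumPy sketch (not the paper's GPU implementation):

```python
import numpy as np

def l2_distance_matrix(X, Y):
    """Squared L2 distances via one GEMM:
    ||x - y||^2 = ||x||^2 + ||y||^2 - 2 <x, y>."""
    x_norms = (X ** 2).sum(axis=1)[:, None]   # (nx, 1)
    y_norms = (Y ** 2).sum(axis=1)[None, :]   # (1, ny)
    cross = X @ Y.T                           # the GEMM, shape (nx, ny)
    # clamp tiny negative values caused by floating-point cancellation
    return np.maximum(x_norms + y_norms - 2.0 * cross, 0.0)

X = np.random.rand(4, 16)
Y = np.random.rand(5, 16)
D = l2_distance_matrix(X, Y)
# sanity check against the direct computation
assert np.allclose(D, ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
```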
# 6.6 Using the k-NN graph
When a k-NN graph has been constructed for an image dataset, we can find paths in the graph between any two images, provided there is a single connected component (this is the case). For example, we can search the shortest path between two images of flowers, by propagating neighbors from a starting image to a destination image. Denoting by S and D the source and destination images, and d_{ij} the distance between nodes, we search the path P = {p_1, ..., p_n} with p_1 = S and p_n = D such that
\min_P \max_{i=1..n} d_{p_i p_{i+1}}, \quad (12)

i.e., we want to favor smooth transitions. An example result is shown in Figure 6 from Yfcc100M⁴. It was obtained after 20 seconds of propagation in a k-NN graph with k = 15 neighbors. Since there are many flower images in the dataset, the transitions are smooth.

⁴The mapping from vectors to images is not available for Deep1B.
# 7. CONCLUSION
The arithmetic throughput and memory bandwidth of GPUs are well into the teraflops and hundreds of gigabytes per second. However, implementing algorithms that approach these performance levels is complex and counter-intuitive. In this paper, we presented the algorithmic structure of similarity search methods that achieves near-optimal performance on GPUs.
This work enables applications that needed complex approximate algorithms before. For example, the approaches presented here make it possible to do exact k-means clustering or to compute the k-NN graph with simple brute-force approaches in less time than a CPU (or a cluster of them) would take to do this approximately.
GPU hardware is now very common on scientific workstations, due to their popularity for machine learning algorithms. We believe that our work further demonstrates their interest for database applications. Along with this work, we are publishing a carefully engineered implementation of this paper's algorithms, so that these GPUs can now also be used for efficient similarity search.

# 8. REFERENCES
[1] T. Alabi, J. D. Blanchard, B. Gordon, and R. Steinbach. Fast k-selection algorithms for graphics processing units. ACM Journal of Experimental Algorithmics, 17:4.2:4.1–4.2:4.29, October 2012.
[2] F. André, A.-M. Kermarrec, and N. L. Scouarnec. Cache locality is not enough: High-performance nearest neighbor search with product quantization fast scan. In Proc. International Conference on Very Large DataBases, pages 288–299, 2015.
[3] Y. Avrithis, Y. Kalantidis, E. Anagnostopoulos, and I. Z. Emiris. Web-scale image clustering revisited. In Proc. International Conference on Computer Vision, pages 1502–1510, 2015.
[4] A. Babenko and V. Lempitsky. The inverted multi-index. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3069–3076, June 2012.
[5] A. Babenko and V. Lempitsky. Improving bilayer product quantization for billion-scale approximate nearest neighbors in high dimensions. arXiv preprint arXiv:1404.1831, 2014.
[6] A. Babenko and V. Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2055–2063, June 2016.
[7] R. Barrientos, J. Gómez, C. Tenllado, M. Prieto, and M. Marin. kNN query processing in metric spaces using GPUs. In International European Conference on Parallel and Distributed Computing, volume 6852 of Lecture Notes in Computer Science, pages 380–392, Bordeaux, France, September 2011. Springer.
[8] K. E. Batcher. Sorting networks and their applications. In Proc. Spring Joint Computer Conference, AFIPS '68 (Spring), pages 307–314, New York, NY, USA, 1968. ACM.
[9] P. Boncz, W. Lehner, and T. Neumann. Special issue: Modern hardware. The VLDB Journal, 25(5):623–624, 2016.
[10] J. Canny, D. L. W. Hall, and D. Klein. A multi-teraflop constituency parser using GPUs. In Proc. Empirical Methods on Natural Language Processing, pages 1898–1907. ACL, 2013.
[11] J. Canny and H. Zhao. Bidmach: Large-scale learning with zero memory allocation. In BigLearn workshop, NIPS, 2013.
[12] B. Catanzaro, A. Keller, and M. Garland. A decomposition for in-place matrix transposition. In Proc. ACM Symposium on Principles and Practice of Parallel Programming, PPoPP '14, pages 193–206, 2014.
[13] J. Chhugani, A. D. Nguyen, V. W. Lee, W. Macy, M. Hagog, Y.-K. Chen, A. Baransi, S. Kumar, and P. Dubey. Efficient implementation of sorting on multi-core SIMD CPU architecture. Proc. VLDB Endow., 1(2):1313–1324, August 2008.
[14] A. Dashti. Efficient computation of k-nearest neighbor graphs for large high-dimensional data sets on GPU clusters. Master's thesis, University of Wisconsin Milwaukee, August 2013.
[15] W. Dong, M. Charikar, and K. Li. Efficient k-nearest neighbor graph construction for generic similarity measures. In WWW: Proceeding of the International Conference on World Wide Web, pages 577–586, March 2011.
[16] M. Douze, H. Jégou, and F. Perronnin. Polysemous codes. In Proc. European Conference on Computer Vision, pages 785–801. Springer, October 2016.
[17] T. Ge, K. He, Q. Ke, and J. Sun. Optimized product quantization. IEEE Trans. PAMI, 36(4):744–755, 2014.
[18] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 817–824, June 2011.
[19] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In Proc. European Conference on Computer Vision, pages 392–407, 2014.
[20] A. Gordo, J. Almazan, J. Revaud, and D. Larlus. Deep image retrieval: Learning global representations for image search. In Proc. European Conference on Computer Vision, pages 241–257, 2016.
[21] S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[22] K. He, F. Wen, and J. Sun. K-means hashing: An affinity-preserving quantization method for learning binary compact codes. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2938–2945, June 2013.
[23] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, June 2016.
[24] X. He, D. Agarwal, and S. K. Prasad. Design and implementation of a parallel priority queue on many-core architectures. IEEE International Conference on High Performance Computing, pages 1–10, 2012.
[25] H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Trans. PAMI, 33(1):117–128, January 2011.
[26] H. Jégou, R. Tavenard, M. Douze, and L. Amsaleg. Searching in one billion vectors: re-rank with source coding. In International Conference on Acoustics, Speech, and Signal Processing, pages 861–864, May 2011.
[27] Y. Kalantidis and Y. Avrithis. Locally optimized product quantization for approximate nearest neighbor search. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2329–2336, June 2014.
[28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[29] F. T. Leighton. Introduction to Parallel Algorithms and Architectures: Array, Trees, Hypercubes. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1992.
[30] E. Lindholm, J. Nickolls, S. Oberman, and J. Montrym. NVIDIA Tesla: a unified graphics and computing architecture. IEEE Micro, 28(2):39–55, March 2008.
[31] W. Liu and B. Vinter. Ad-heap: An efficient heap data structure for asymmetric multicore processors. In Proc. of Workshop on General Purpose Processing Using GPUs, pages 54:54–54:63. ACM, 2014.
[32] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[33] L. Monroe, J. Wendelberger, and S. Michalak. Randomized selection on the GPU. In Proc. ACM Symposium on High Performance Graphics, pages 89–98, 2011.
[34] M. Norouzi and D. Fleet. Cartesian k-means. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3017–3024, June 2013.
[35] M. Norouzi, A. Punjani, and D. J. Fleet. Fast search in Hamming space with multi-index hashing. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 3108–3115, 2012.
[36] J. Pan and D. Manocha. Fast GPU-based locality sensitive hashing for k-nearest neighbor computation. In Proc. ACM International Conference on Advances in Geographic Information Systems, pages 211–220, 2011.
[37] L. Paulevé, H. Jégou, and L. Amsaleg. Locality sensitive hashing: a comparison of hash function types and querying mechanisms. Pattern recognition letters, 31(11):1348–1358, August 2010.
[38] O. Shamir. Fundamental limits of online and distributed algorithms for statistical learning and estimation. In Advances in Neural Information Processing Systems, pages 163–171, 2014.
[39] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In CVPR workshops, pages 512–519, 2014.
[40] N. Sismanis, N. Pitsianis, and X. Sun. Parallel search of k-nearest neighbors with synchronous operations. In IEEE High Performance Extreme Computing Conference, pages 1–6, 2012.
[41] X. Tang, Z. Huang, D. M. Eyers, S. Mills, and M. Guo. Efficient selection algorithm for fast k-NN search on GPUs. In IEEE International Parallel & Distributed Processing Symposium, pages 397–406, 2015.
[42] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2):64–73, January 2016.
[43] V. Volkov and J. W. Demmel. Benchmarking GPUs to tune dense linear algebra. In Proc. ACM/IEEE Conference on Supercomputing, pages 31:1–31:11, 2008.
[44] A. Wakatani and A. Murakami. GPGPU implementation of nearest neighbor search with product quantization. In IEEE International Symposium on Parallel and Distributed Processing with Applications, pages 248–253, 2014.
[45] T. Warashina, K. Aoyama, H. Sawada, and T. Hattori. Efficient k-nearest neighbor graph construction using MapReduce for large-scale data sets. IEICE Transactions, 97-D(12):3142–3154, 2014.
[46] R. Weber, H.-J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Proc. International Conference on Very Large DataBases, pages 194–205, 1998.
[47] P. Wieschollek, O. Wang, A. Sorkine-Hornung, and H. P. A. Lensch. Efficient large-scale approximate nearest neighbor search on the GPU. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 2027–2035, June 2016.
[48] S. Williams, A. Waterman, and D. Patterson. Roofline: An insightful visual performance model for multicore architectures. Communications of the ACM, 52(4):65–76, April 2009.
# Appendix: Complexity analysis of WarpSelect
We derive the average number of times updates are triggered in WarpSelect, for use in Section 4.3.
Let the input to k-selection be a sequence {a_1, a_2, ..., a_ℓ} (1-based indexing), a randomly chosen permutation of a set of distinct elements. Elements are read sequentially in c groups of size w (the warp; in our case, w = 32); assume ℓ is a multiple of w, so c = ℓ/w. Recall that t is the thread queue length. We call elements prior to or at position n in the min-k seen so far the successive min-k (at n). The likelihood that a_n is in the successive min-k at n is:

\alpha(n, k) := \begin{cases} 1 & \text{if } n \le k \\ k/n & \text{if } n > k \end{cases} \quad (13)
as each a_n, n > k, has a k/n chance, since all permutations are equally likely, and all elements among the first k qualify.
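Equation (13) is easy to verify empirically; a minimal Monte Carlo sketch:

```python
import random

def successive_min_k_rate(n, k, trials=20000):
    """Empirical P(a_n is among the min-k of a_1..a_n) for a random
    permutation of distinct values; equation (13) predicts k/n for n > k."""
    hits = 0
    for _ in range(trials):
        seq = random.sample(range(10 * n), n)      # n distinct values
        rank = sorted(seq).index(seq[-1])          # rank of a_n among a_1..a_n
        hits += rank < k
    return hits / trials

print(successive_min_k_rate(40, 10))   # ~ 10/40 = 0.25
```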
Counting the insertion sorts. In a given lane, an insertion sort is triggered if the incoming value is in the successive min-(k + t) values, but the lane has "seen" only wc_0 + (c − c_0) values, where c_0 is the previous won warp ballot. The probability of this happening is:

\alpha(wc_0 + (c - c_0), k + t) \approx \frac{k + t}{wc} \quad \text{for } c > k. \quad (14)
The approximation considers that the thread queue has seen all the wc values, not just those assigned to its lane. The probability of any lane triggering an insertion sort is then:
1 - \left(1 - \frac{k + t}{wc}\right)^{w} \approx \frac{k + t}{c}. \quad (15)
Here the approximation is a first-order Taylor expansion. Summing up the probabilities over the c groups gives an expected number of insertions of N_2 ≈ (k + t) log(c) = O(k log(ℓ/w)).

Counting full sorts. We seek N_3 = π(ℓ, k, t, w), the expected number of full sorts required by WarpSelect.
Single lane. For now, we assume w = 1, so c = ℓ. Let γ(ℓ, m, k) be the probability that in a sequence {a_1, ..., a_ℓ}, exactly m of the elements, as encountered by a sequential scanner (w = 1), are in the successive min-k. Given m, there are \binom{\ell}{m} places where these successive min-k elements can occur. It is given by a recurrence relation:

\gamma(\ell, m, k) = \begin{cases} 1 & \ell = 0 \text{ and } m = 0 \\ 0 & \ell = 0 \text{ and } m > 0 \\ 0 & \ell > 0 \text{ and } m = 0 \\ \gamma(\ell-1, m-1, k)\,\alpha(\ell, k) + \gamma(\ell-1, m, k)\,(1 - \alpha(\ell, k)) & \text{otherwise.} \end{cases} \quad (16)
The last case is the probability of: there is an (ℓ − 1)-sequence with m − 1 successive min-k elements preceding us, and the current element is in the successive min-k; or the current element is not in the successive min-k and m such elements are before us. We can then develop a recurrence relationship for π(ℓ, k, t, 1). Note that

\delta(\ell, b, k, t) := \sum_{m=bt}^{\min(bt + \max(0, t-1),\, \ell)} \gamma(\ell, m, k) \quad (17)
for b, where 0 ≤ bt ≤ ℓ, is the fraction of all sequences of length ℓ that will force b sorts of data by winning the thread queue ballot, as there have to be bt to (bt + max(0, t − 1)) elements in the successive min-k for these sorts to happen (as the min-k elements will overflow the thread queues). There are at most ⌊ℓ/t⌋ won ballots that can occur, as it takes t separate sequential successive min-k elements to win the ballot. π(ℓ, k, t, 1) is thus the expectation of this over all possible b:

\pi(\ell, k, t, 1) = \sum_{b=1}^{\lfloor \ell/t \rfloor} b \cdot \delta(\ell, b, k, t). \quad (18)
This can be computed by dynamic programming. Analytically, note that for t = 1, k = 1, π(ℓ, 1, 1, 1) is the harmonic number H_ℓ = 1 + 1/2 + 1/3 + ... + 1/ℓ, which converges to ln(ℓ) + γ (the Euler–Mascheroni constant γ) as ℓ → ∞.

For t = 1, k > 1, ℓ > k, π(ℓ, k, 1, 1) = k + k(H_ℓ − H_k) or O(k log(ℓ)), as the first k elements are in the successive min-k, and the expectation for the rest is k/(k+1) + k/(k+2) + ... + k/ℓ.
For t > 1, k > 1, ℓ > k, note that there are some number D, k ≤ D ≤ ℓ, of successive min-k determinations D made for each possible {a_1, ..., a_ℓ}. The number of won ballots for each case is by definition ⌊D/t⌋, as the thread queue must fill up t times. Thus, π(ℓ, k, t, 1) = O(k log(ℓ)/t).
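The dynamic program implied by equations (16)-(18) is short to write down; a minimal single-lane sketch (w = 1), with the harmonic-number case as a sanity check:

```python
from functools import lru_cache

def alpha(n, k):                          # equation (13)
    return 1.0 if n <= k else k / n

def expected_full_sorts(ell, k, t):
    """pi(ell, k, t, 1) via the recurrences (16)-(18), single lane."""
    @lru_cache(maxsize=None)
    def gamma(n, m):                      # equation (16)
        if n == 0:
            return 1.0 if m == 0 else 0.0
        if m == 0:
            return 0.0
        return (gamma(n - 1, m - 1) * alpha(n, k)
                + gamma(n - 1, m) * (1.0 - alpha(n, k)))

    def delta(b):                         # equation (17)
        hi = min(b * t + max(0, t - 1), ell)
        return sum(gamma(ell, m) for m in range(b * t, hi + 1))

    # equation (18): expectation over the number of won ballots b
    return sum(b * delta(b) for b in range(1, ell // t + 1))

# sanity check: for t = 1, k = 1 this equals the harmonic number H_ell,
# e.g. H_64 = 4.7439... ~ ln(64) + 0.5772
print(expected_full_sorts(64, 1, 1))
```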
Multiple lanes. The w > 1 case is complicated by the fact that there are joint probabilities to consider (if more than one of the w workers triggers a sort for a given group, only one sort takes place). However, the likelihood can be bounded. Let π′(ℓ, k, t, w) be the expected won ballots assuming no mutual interference between the w workers for winning ballots (i.e., we win b ballots if there are b ≤ w workers that independently win a ballot at a single step), but with the shared min-k set after each sort from the joint sequence. Assume that k > w. Then:
\pi'(\ell, k, 1, w) \le w \left( \left\lceil \frac{k}{w} \right\rceil + \sum_{i=1}^{\lceil \ell/w \rceil - \lceil k/w \rceil} \frac{k}{\lceil k/w \rceil + i} \right) \le w \, \pi(\lceil \ell/w \rceil, k, 1, 1) = O(wk \log(\ell/w)) \quad (19)
where the likelihood of the w workers seeing a successive min-k element has an upper bound of that of the first worker at each step. As before, the number of won ballots is scaled by t, so π′(ℓ, k, t, w) = O(wk log(ℓ/w)/t). Mutual interference can only reduce the number of ballots, so we obtain the same upper bound for π(ℓ, k, t, w).
# Deceiving Google's Perspective API Built for Detecting Toxic Comments
Hossein Hosseini, Sreeram Kannan, Baosen Zhang and Radha Poovendran
Network Security Lab (NSL), Department of Electrical Engineering, University of Washington, Seattle, WA
Email: {hosseinh, ksreeram, zhangbao, rp3}@uw.edu
Abstract—Social media platforms provide an environment where people can freely engage in discussions. Unfortunately, they also enable several problems, such as online harassment. Recently, Google and Jigsaw started a project called Perspective, which uses machine learning to automatically detect toxic language. A demonstration website has also been launched, which allows anyone to type a phrase in the interface and instantaneously see the toxicity score [1].
In this paper, we propose an attack on the Perspective toxic detection system based on adversarial examples. We show that an adversary can subtly modify a highly toxic phrase in a way that the system assigns a significantly lower toxicity score to it. We apply the attack on the sample phrases provided in the Perspective website and show that we can consistently reduce the toxicity scores to the level of the non-toxic phrases. The existence of such adversarial examples is very harmful for toxic detection systems and seriously undermines their usability.
AI to help with providing a safe environment for online discussions [10].
Perspective is an API that enables the developers to use the toxic detector running on Googleâs servers, to identify harassment and abuse on social media or more efï¬ciently ï¬ltering invective from the comments on a news website. Jigsaw has partnered with online communities and publishers, such as Wikipedia [3] and The New York Times [11], to implement this toxicity measurement system.
Recently, a demonstration website has been launched, which allows anyone to type a phrase in the Perspectiveâs interface and instantaneously see how it rates on the âtoxicityâ scale [1]. The Perspective website has also open sourced the experiments, models and research data in order to explore the strengths and weaknesses of using machine learning as a tool for online discussion.
# I. INTRODUCTION | 1702.08138#1 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Social media platforms provide an environment where people can freely engage
in discussions. Unfortunately, they also enable several problems, such as
online harassment. Recently, Google and Jigsaw started a project called
Perspective, which uses machine learning to automatically detect toxic
language. A demonstration website has been also launched, which allows anyone
to type a phrase in the interface and instantaneously see the toxicity score
[1]. In this paper, we propose an attack on the Perspective toxic detection
system based on the adversarial examples. We show that an adversary can subtly
modify a highly toxic phrase in a way that the system assigns significantly
lower toxicity score to it. We apply the attack on the sample phrases provided
in the Perspective website and show that we can consistently reduce the
toxicity scores to the level of the non-toxic phrases. The existence of such
adversarial examples is very harmful for toxic detection systems and seriously
undermines their usability. | http://arxiv.org/pdf/1702.08138 | Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran | cs.LG, cs.CY, cs.SI | 4 pages | null | cs.LG | 20170227 | 20170227 | [
{
"id": "1606.04435"
},
{
"id": "1602.02697"
},
{
"id": "1610.08914"
}
] |
1702.08138 | 2 | # I. INTRODUCTION
Social media platforms provide an environment where peo- ple can learn about the trends and news, freely share their opinions and engage in discussions. Unfortunately, the lack of a moderating entity in these platforms has caused several problems, ranging from the wide spread of fake news to online harassment [2]. Due to the growing concern about the impact of online harassment on the peopleâs experience of the Internet, many platforms are taking steps to enhance the safety of the online environments [3], [4].
Some of the platforms employ approaches such as refining the information based on crowdsourcing (upvotes/downvotes), turning off comments, or manual moderation to mitigate the effect of inappropriate content [5]. These approaches, however, are inefficient and not scalable. As a result, there have been many calls for researchers to develop methods to automatically detect abusive or toxic content in real time [6].
Recent advances in machine learning have transformed many domains such as computer vision [7], speech recognition [8], and language processing [9]. Many researchers have explored using machine learning to also tackle the problem of online harassment. Recently, Google and Jigsaw launched a project called Perspective [1], which uses machine learning to automatically detect online insults, harassment, and abusive speech. The system intends to bring Conversation | 1702.08138#2 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Social media platforms provide an environment where people can freely engage
in discussions. Unfortunately, they also enable several problems, such as
online harassment. Recently, Google and Jigsaw started a project called
Perspective, which uses machine learning to automatically detect toxic
language. A demonstration website has been also launched, which allows anyone
to type a phrase in the interface and instantaneously see the toxicity score
[1]. In this paper, we propose an attack on the Perspective toxic detection
system based on the adversarial examples. We show that an adversary can subtly
modify a highly toxic phrase in a way that the system assigns significantly
lower toxicity score to it. We apply the attack on the sample phrases provided
in the Perspective website and show that we can consistently reduce the
toxicity scores to the level of the non-toxic phrases. The existence of such
adversarial examples is very harmful for toxic detection systems and seriously
undermines their usability. | http://arxiv.org/pdf/1702.08138 | Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran | cs.LG, cs.CY, cs.SI | 4 pages | null | cs.LG | 20170227 | 20170227 | [
{
"id": "1606.04435"
},
{
"id": "1602.02697"
},
{
"id": "1610.08914"
}
] |
1702.08138 | 3 | The implicit assumption of learning models is that they will be deployed in benign settings. However, many works have pointed out their vulnerability in adversarial scenarios [12]–[14]. One type of vulnerability of machine learning algorithms is that an adversary can change the algorithm's output by subtly perturbing the input, often in a way unnoticeable to humans. Such inputs are called adversarial examples [15], and they have been shown to be effective against different machine learning algorithms even when the adversary has only black-box access to the target model [16].
In this paper, we demonstrate the vulnerability of Google's recently released Perspective system to adversarial examples. In Perspective's text classification task, adversarial examples can be defined as modified texts that contain the same highly abusive language as the original text yet receive a significantly lower toxicity score from the learning model. Through different experiments, we show that an adversary can deceive the system by misspelling the abusive words or by adding punctuation between the letters (both perturbations are sketched in code after this record). The existence of adversarial examples is very harmful for toxic detection systems and seriously undermines their usability, especially since these systems are likely to be employed in adversarial settings. We conclude the paper by proposing some countermeasures to the proposed attack.
# II. BACKGROUND
A. Brief Description of Google's Perspective API | 1702.08138#3 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Social media platforms provide an environment where people can freely engage
in discussions. Unfortunately, they also enable several problems, such as
online harassment. Recently, Google and Jigsaw started a project called
Perspective, which uses machine learning to automatically detect toxic
language. A demonstration website has been also launched, which allows anyone
to type a phrase in the interface and instantaneously see the toxicity score
[1]. In this paper, we propose an attack on the Perspective toxic detection
system based on the adversarial examples. We show that an adversary can subtly
modify a highly toxic phrase in a way that the system assigns significantly
lower toxicity score to it. We apply the attack on the sample phrases provided
in the Perspective website and show that we can consistently reduce the
toxicity scores to the level of the non-toxic phrases. The existence of such
adversarial examples is very harmful for toxic detection systems and seriously
undermines their usability. | http://arxiv.org/pdf/1702.08138 | Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran | cs.LG, cs.CY, cs.SI | 4 pages | null | cs.LG | 20170227 | 20170227 | [
{
"id": "1606.04435"
},
{
"id": "1602.02697"
},
{
"id": "1610.08914"
}
] |
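The attack described in the chunk above perturbs the abusive words themselves rather than the model. Below is a minimal sketch of the two perturbation styles mentioned: misspelling by swapping adjacent characters, and inserting punctuation between letters. The function names, the word list, the exact swap position, and the naive whitespace tokenization are illustrative assumptions, not the paper's exact procedure.

```python
# A sketch of the two perturbations described above. The misspelling
# strategy and the abusive-word list are assumptions for illustration.

def misspell(word):
    """Swap two adjacent middle characters, e.g. 'stupid' -> 'stuipd'."""
    if len(word) < 4:
        return word
    i = len(word) // 2
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def punctuate(word):
    """Insert dots between letters, e.g. 'idiot' -> 'i.d.i.o.t'."""
    return ".".join(word)

def perturb(phrase, abusive_words, mode=misspell):
    """Apply `mode` to every abusive word; naive whitespace tokenization."""
    return " ".join(
        mode(w) if w.lower() in abusive_words else w
        for w in phrase.split()
    )

print(perturb("you are an idiot", {"idiot"}))             # you are an idoit
print(perturb("you are an idiot", {"idiot"}, punctuate))  # you are an i.d.i.o.t
```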
1702.08138 | 4 | # II. BACKGROUND
A. Brief Description of Google's Perspective API
This work was supported by ONR grants N00014-14-1-0029 and N00014-16-1-2710, ARO grant W911NF-16-1-0485, and NSF grant CNS-1446866.
Perspective is an API created by Jigsaw and Google's Counter Abuse Technology team in Conversation-AI. Conversation AI is a collaborative research effort exploring ML as a
TABLE I: Demonstration of the Attack on the Perspective Toxic Detection System. All phrases in the first column of the table are chosen from the examples provided by the Perspective website [1] (an end-to-end scoring sketch follows this record). | 1702.08138#4 | Deceiving Google's Perspective API Built for Detecting Toxic Comments | Social media platforms provide an environment where people can freely engage
in discussions. Unfortunately, they also enable several problems, such as
online harassment. Recently, Google and Jigsaw started a project called
Perspective, which uses machine learning to automatically detect toxic
language. A demonstration website has been also launched, which allows anyone
to type a phrase in the interface and instantaneously see the toxicity score
[1]. In this paper, we propose an attack on the Perspective toxic detection
system based on the adversarial examples. We show that an adversary can subtly
modify a highly toxic phrase in a way that the system assigns significantly
lower toxicity score to it. We apply the attack on the sample phrases provided
in the Perspective website and show that we can consistently reduce the
toxicity scores to the level of the non-toxic phrases. The existence of such
adversarial examples is very harmful for toxic detection systems and seriously
undermines their usability. | http://arxiv.org/pdf/1702.08138 | Hossein Hosseini, Sreeram Kannan, Baosen Zhang, Radha Poovendran | cs.LG, cs.CY, cs.SI | 4 pages | null | cs.LG | 20170227 | 20170227 | [
{
"id": "1606.04435"
},
{
"id": "1602.02697"
},
{
"id": "1610.08914"
}
] |
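Table I, whose caption appears in the chunk above, compares toxicity scores before and after perturbation. A sketch of that comparison loop is given below, assuming the `toxicity_score()` and `perturb()` helpers from the earlier sketches; the sample phrase and word list are illustrative, not the table's actual rows.

```python
# Score each phrase before and after perturbation, as in Table I.
# Assumes toxicity_score() and perturb() from the sketches above;
# the sample phrase and word list are illustrative.
samples = [
    ("you are an idiot and everyone knows it", {"idiot"}),
]

for phrase, abusive in samples:
    adversarial = perturb(phrase, abusive, mode=punctuate)
    before = toxicity_score(phrase)
    after = toxicity_score(adversarial)
    print(f"{before:.2f} -> {after:.2f}  {adversarial}")
```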