Bridging Dense and Sparse Maximum Inner Product Search

Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty
arXiv:2309.09013 [cs.IR], September 16, 2023 — http://arxiv.org/pdf/2309.09013

Abstract. Maximum inner product search (MIPS) over dense and sparse vectors has progressed independently in a bifurcated literature for decades; the latter is better known as top-$k$ retrieval in Information Retrieval. This duality exists because sparse and dense vectors serve different end goals, despite the fact that they are manifestations of the same mathematical problem. In this work, we ask if algorithms for dense vectors could be applied effectively to sparse vectors, particularly those that violate the assumptions underlying top-$k$ retrieval methods. We study IVF-based retrieval, where vectors are partitioned into clusters and only a fraction of clusters are searched during retrieval. We conduct a comprehensive analysis of dimensionality reduction for sparse vectors, and examine standard and spherical KMeans for partitioning. Our experiments demonstrate that IVF serves as an efficient solution for sparse MIPS. As byproducts, we identify two research opportunities and demonstrate their potential. First, we cast the IVF paradigm as a dynamic pruning technique and turn that insight into a novel organization of the inverted index for approximate MIPS over general sparse vectors. Second, we offer a unified regime for MIPS over vectors that have dense and sparse subspaces, and show its robustness to query distributions.
If a document's score upper-bound exceeds the current threshold (i.e., the minimum of scores in the current top-𝑘 set), then that document should be fully evaluated; otherwise, it has no prospect of ever making it to the top-𝑘 set and can therefore be safely rejected. As articulated elsewhere [16], the logic above is effective when vectors have very specific properties: non-negativity, an asymmetrically higher sparsity rate in queries, and a Zipfian distribution of the lengths of inverted lists. These assumptions do hold for relevance measures such as BM25 [62]; sparse MIPS algorithms were designed for text distributions, after all. These limitations render existing algorithms inefficient for the general case of sparse MIPS, where vectors may be real-valued and their sparsity rate is closer to uniform across dimensions. In that regime, coordinate upper-bounds become more uniform, leading to less effective pruning of the inverted lists. That, among other problems [16, 18], renders the particular dynamic pruning strategy in MaxScore and WAND ineffective, as demonstrated empirically in the past [16, 48].
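To make the threshold test concrete, the following minimal Rust fragment (ours; all names are illustrative) captures the pruning decision described above:

```rust
// Minimal sketch of the threshold test in MaxScore/WAND-style pruning.
// `upper_bounds_of_query_terms[t]` is the precomputed maximum partial score
// that query term t can contribute over its inverted list.
fn should_fully_evaluate(upper_bounds_of_query_terms: &[f32], threshold: f32) -> bool {
    // The sum of per-term upper bounds is an upper bound on the document's score.
    let score_upper_bound: f32 = upper_bounds_of_query_terms.iter().sum();
    // Only documents whose bound clears the current top-k minimum survive.
    score_upper_bound > threshold
}

fn main() {
    // Current threshold: the minimum score in the running top-k set.
    let threshold = 7.5_f32;
    assert!(should_fully_evaluate(&[3.0, 2.5, 2.5], threshold)); // 8.0 > 7.5
    assert!(!should_fully_evaluate(&[3.0, 2.0, 2.0], threshold)); // 7.0 <= 7.5
}
```

When coordinate upper-bounds are nearly uniform, as described above, this bound clears the threshold for most documents and prunes almost nothing.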
2.1.2 Signatures for Logical Queries. There are alternatives to the inverted index, however, such as the use of signatures for retrieval and sketches for inner product approximation [27, 61, 70]. In this class of algorithms, Goodwin et al. [27] describe the BitFunnel indexing machinery. BitFunnel stores a bit signature for every document vector in the index using Bloom filters. These signatures are scanned during retrieval to deduce if a document contains the terms of a conjunctive query. While it is encouraging that a signature-based replacement for inverted indexes appears not only viable but very much practical, the query logic BitFunnel supports is limited to logical ANDs and does not generalize to the setup we consider in this work. Pratap et al. [61] considered a simple algorithm to sketch sparse binary vectors so that the inner product of sketches approximates the inner product of the original vectors. They do so by randomly projecting each coordinate in the original space to coordinates in the sketch. When two or more non-zero coordinates collide, the sketch records their logical OR. While a later work extends this idea to categorical-valued vectors [70], it is not obvious how the proposed sketching mechanisms may be extended to real-valued vectors.
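To make the OR-sketch idea concrete, here is a minimal Rust illustration (ours; the hash-based projection and sketch width are illustrative stand-ins for the random projections of [61]):

```rust
// Each original coordinate is projected into a smaller sketch; colliding
// non-zero coordinates are combined with a logical OR.
fn or_sketch(nonzero_coords: &[usize], sketch_width: usize) -> Vec<bool> {
    let mut sketch = vec![false; sketch_width];
    for &i in nonzero_coords {
        // Cheap deterministic "random" projection of coordinate i.
        let j = i.wrapping_mul(2654435761) % sketch_width;
        sketch[j] = true; // logical OR on collision
    }
    sketch
}

// The inner product of two binary sketches (AND, then count) approximates
// the inner product (overlap) of the original binary vectors.
fn sketch_inner_product(a: &[bool], b: &[bool]) -> usize {
    a.iter().zip(b).filter(|(x, y)| **x && **y).count()
}

fn main() {
    let u = or_sketch(&[3, 17, 905], 64);
    let v = or_sketch(&[3, 905, 2048], 64);
    println!("approximate overlap: {}", sketch_inner_product(&u, &v)); // true overlap is 2
}
```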
2.1.3 General Sparse MIPS. The most relevant work to ours is the recent study of general sparse MIPS by Bruch et al. [16]. Building on random projections, the authors proposed a sketching algorithm, dubbed Sinnamon, that embeds sparse vectors into a low-dimensional sparse subspace. Sinnamon, as with the previous approach, randomly projects coordinates from the original space to the sketch space. But the sketch space is a union of two subspaces: one that records the upper-bound on coordinate values and another that registers the lower-bound instead. It was shown that reconstructing a sparse vector from the sketch approximates the inner product with any arbitrary query with high accuracy. Bruch et al. [16] couple the sketches with an inverted index, and empirically evaluate a coordinate-at-a-time algorithm for sparse MIPS. They show considerable compression rates in terms of index size, as well as latencies that are sometimes an order of magnitude better than WAND's on embedding vectors produced by Splade [24, 25].
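The following fragment is our own simplified, single-mapping illustration of Sinnamon's upper/lower-bound idea; the actual algorithm [16] uses several random mappings and a more careful decoding:

```rust
// Every coordinate is projected into a small array; each cell keeps the
// maximum (upper-bound) and minimum (lower-bound) of colliding values.
struct SinnamonSketch {
    upper: Vec<f32>,
    lower: Vec<f32>,
}

fn hash(i: usize, width: usize) -> usize {
    i.wrapping_mul(2654435761) % width
}

fn sketch(nonzeros: &[(usize, f32)], width: usize) -> SinnamonSketch {
    let (mut upper, mut lower) = (vec![0.0_f32; width], vec![0.0_f32; width]);
    for &(i, v) in nonzeros {
        let j = hash(i, width);
        upper[j] = upper[j].max(v);
        lower[j] = lower[j].min(v);
    }
    SinnamonSketch { upper, lower }
}

// Decoding against a query yields an upper bound on the true inner product:
// positive query entries pair with the cell's upper bound, negative ones with
// the lower bound, so every term over-estimates its true contribution.
fn inner_product_upper_bound(query: &[(usize, f32)], s: &SinnamonSketch, width: usize) -> f32 {
    query.iter().map(|&(i, q)| {
        let j = hash(i, width);
        if q >= 0.0 { q * s.upper[j] } else { q * s.lower[j] }
    }).sum()
}

fn main() {
    let doc = [(2, 0.5_f32), (40, 1.25), (1001, -0.75)];
    let query = [(2, 1.0_f32), (1001, 0.5)];
    let s = sketch(&doc, 16);
    println!("upper bound: {}", inner_product_upper_bound(&query, &s, 16));
}
```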
2.2 Dense MIPS

Let us note that there exists a vast body of work on approximate nearest neighbor (ANN) search that is, in and of itself, an interesting area of research. Strictly speaking, however, MIPS is a fundamentally different (and, in fact, much harder) problem because inner product is not a proper metric; in fact, maximum cosine similarity search and ANN with Euclidean distance are special cases of MIPS. In spite of this, many MIPS solutions for dense vectors adapt ANN solutions to inner product, often without any theoretical justification. Consider, for example, the family of MIPS solutions based on proximity graphs, such as IP-NSW [55] and its many derivatives [42, 65, 81]. These algorithms construct a graph where each data point is a node, and two nodes are connected if they are deemed "similar." Typically, similarity is based on Euclidean distance. But the authors of [55] show that when one uses inner product (albeit improperly) to construct the graph, the resulting structure is nonetheless capable of finding the maximizers of inner product rather quickly and accurately.
Graph-based methods may work well, but they come with two serious issues. First, while we can reason about their performance in Euclidean space, we can say very little about why they do or do not work for inner product, and under what conditions they may fail. It is difficult, for example, to settle on a configuration of hyperparameters without conducting extensive experiments and evaluation on a validation dataset. The second and even more limiting challenge is the poor scalability and slow index construction of graph methods. Another family of MIPS algorithms can best be described as different realizations of Locality Sensitive Hashing (LSH) [29, 30, 43, 56, 63, 64, 74, 77]. The idea is to project data points such that "similar" points are placed into the same "bucket." Doing so enables sublinear search because, during retrieval, we limit the search to the buckets that collide with the query.
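As one concrete instance of the bucketing idea, the following Rust fragment (ours) sketches sign-random-projection hashing for angular similarity; the hyperplanes, toy xorshift generator, and all parameters are illustrative:

```rust
// Each of `bits` random hyperplanes contributes one bit of a bucket code;
// retrieval would only scan buckets whose code collides with the query's.
fn xorshift(state: &mut u64) -> f64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    (*state as f64 / u64::MAX as f64) * 2.0 - 1.0 // roughly uniform in [-1, 1]
}

fn bucket_of(point: &[f64], hyperplanes: &[Vec<f64>]) -> u32 {
    let mut code = 0u32;
    for (bit, h) in hyperplanes.iter().enumerate() {
        let dot: f64 = point.iter().zip(h).map(|(x, w)| x * w).sum();
        if dot >= 0.0 {
            code |= 1 << bit; // the side of hyperplane `bit` gives one code bit
        }
    }
    code
}

fn main() {
    let (dim, bits) = (8, 4);
    let mut state = 42u64;
    let mut hyperplanes = Vec::new();
    for _ in 0..bits {
        hyperplanes.push((0..dim).map(|_| xorshift(&mut state)).collect::<Vec<f64>>());
    }
    // Nearby points tend to fall on the same side of every hyperplane,
    // and therefore into the same bucket.
    println!("bucket: {}", bucket_of(&[0.3; 8], &hyperplanes));
}
```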
Many LSH methods for MIPS first transform the problem to Euclidean or angular similarity search, in order to recycle existing hash functions. One of the main challenges with this way of approaching MIPS is that inner product behaves oddly in high dimensions, in a way that differs from, say, Euclidean distance: the maximum inner product between vectors is typically much smaller than the average vector norm. Making LSH-based MIPS accurate requires an increasingly large number of projections, which leads to an unreasonable growth in index size [67]. Another method borrowed from the ANN literature is search using an inverted file (IVF). This method takes advantage of the geometrical structure of vectors to break a large collection into smaller partitions. Points within each partition are expected to yield a similar inner product with an arbitrary query point—though there are no theoretical guarantees that this phenomenon actually materializes. Despite that, clustering-based IVF is a simple and widely-adopted technique [31, 32], and has been shown to perform well for MIPS [8]. Its simplicity and well-understood behavior are the reasons we study this particular technique in this work.
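As a concrete illustration of the IVF paradigm, the following Rust sketch (ours) ranks partitions by the query's inner product with their centroids, which are assumed to come from a prior clustering step such as KMeans, and exhaustively searches only the top `nprobe` of them:

```rust
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn ivf_search(
    query: &[f32],
    centroids: &[Vec<f32>],
    partitions: &[Vec<Vec<f32>>], // partitions[c] holds vectors assigned to centroid c
    nprobe: usize,
) -> Option<f32> {
    // Rank partitions by the query's inner product with their centroids.
    let mut order: Vec<usize> = (0..centroids.len()).collect();
    order.sort_by(|&a, &b| dot(query, &centroids[b]).total_cmp(&dot(query, &centroids[a])));
    // Exhaustively search only the top-`nprobe` partitions (top-1 result shown).
    order.iter().take(nprobe)
        .flat_map(|&c| partitions[c].iter())
        .map(|v| dot(query, v))
        .max_by(|a, b| a.total_cmp(b))
}

fn main() {
    let centroids = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let partitions = vec![vec![vec![0.9, 0.1]], vec![vec![0.2, 0.8]]];
    let best = ivf_search(&[0.0, 1.0], &centroids, &partitions, 1);
    println!("best inner product in probed partitions: {:?}", best);
}
```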
Finally, in our review of the dense MIPS literature, we exclusively described space-partitioning algorithms that reduce the search space through some form of partitioning or hashing, or by organizing vectors in a graph structure and traversing its edges towards the nearest neighbors of a given query. It should be noted, however, that the other, often critical, aspect of MIPS is the actual computation of inner product. Many works address that particular challenge, often via quantization (see [28] and references therein), but they are beyond the scope of this article.

3 NOTATION AND EXPERIMENTAL SETUP

We begin by laying out our notation and terminology. Furthermore, throughout this work, we often interleave theoretical and empirical analysis. To provide sufficient context for our arguments, this section additionally gives details on our empirical setup and evaluation measures.
3.1 Notation

Suppose we have a collection X ⊂ R𝑚+𝑁 of possibly hybrid vectors. That means, if 𝑥 ∈ X, then 𝑥 is a vector comprised of an 𝑚-dimensional dense array and an 𝑁-dimensional sparse array of coordinates, where dense and sparse are as defined in Section 1. We abuse terminology and call the dense part of 𝑥 its "dense vector," denoted 𝑥𝑑 ∈ R𝑚. Similarly, we call the sparse part, 𝑥𝑠 ∈ R𝑁, its "sparse vector." We can write 𝑥 = 𝑥𝑑 ⊕ 𝑥𝑠, where ⊕ denotes concatenation.

Table 1. Datasets of interest along with select statistics. The rightmost two columns report the average number of non-zero entries in documents and, in parentheses, queries for sparse vector representations of the datasets.
| Dataset | Document Count | Query Count | Splade | Efficient Splade |
|---|---|---|---|---|
| MS Marco Passage | 8.8M | 6,980 | 127 (49) | 185 (5.9) |
| NQ | 2.68M | 3,452 | 153 (51) | 212 (8) |
| Quora | 523K | 10,000 | 68 (65) | 68 (8.9) |
| HotpotQA | 5.23M | 7,405 | 131 (59) | 125 (13) |
| Fever | 5.42M | 6,666 | 145 (67) | 140 (8.6) |
| DBPedia | 4.63M | 400 | 134 (49) | 131 (5.9) |

The delineation above will prove helpful later when we discuss the status quo and our proposal within one mathematical framework. Particularly, we can say that a sparse retrieval algorithm operates on the sparse collection X𝑠 = {𝑥𝑠 | 𝑥 = 𝑥𝑑 ⊕ 𝑥𝑠 ∈ X}, and similarly dense retrieval algorithms operate on X𝑑, defined symmetrically. Hybrid vectors collapse to dense vectors when 𝑁 = 0 (or when 𝑥𝑠 = 0 for all 𝑥 ∈ X), and reduce to sparse vectors when 𝑚 = 0 (or 𝑥𝑑 = 0 for all 𝑥 ∈ X).
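Because ⊕ is plain concatenation, the inner product of two hybrid vectors decomposes into a dense part plus a sparse part. The following small Rust illustration (ours, with an illustrative struct layout and sparse parts stored as sorted (index, value) pairs) makes this concrete:

```rust
struct Hybrid {
    dense: Vec<f32>,           // x_d in R^m
    sparse: Vec<(usize, f32)>, // non-zero coordinates of x_s in R^N, sorted by index
}

fn inner_product(x: &Hybrid, q: &Hybrid) -> f32 {
    let dense: f32 = x.dense.iter().zip(&q.dense).map(|(a, b)| a * b).sum();
    // Merge the two sorted coordinate lists of the sparse parts.
    let (mut i, mut j, mut sparse) = (0, 0, 0.0_f32);
    while i < x.sparse.len() && j < q.sparse.len() {
        match x.sparse[i].0.cmp(&q.sparse[j].0) {
            std::cmp::Ordering::Less => i += 1,
            std::cmp::Ordering::Greater => j += 1,
            std::cmp::Ordering::Equal => {
                sparse += x.sparse[i].1 * q.sparse[j].1;
                i += 1;
                j += 1;
            }
        }
    }
    dense + sparse // <x, q> = <x_d, q_d> + <x_s, q_s>
}

fn main() {
    let x = Hybrid { dense: vec![0.5, 0.5], sparse: vec![(3, 1.0), (9, 2.0)] };
    let q = Hybrid { dense: vec![1.0, 0.0], sparse: vec![(9, 0.5)] };
    assert_eq!(inner_product(&x, &q), 0.5 + 1.0);
    println!("ok");
}
```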
We write 𝑛𝑧(𝑢) for the set of non-zero coordinates in a sparse vector 𝑢, 𝑛𝑧(𝑢) = {𝑖 | 𝑢𝑖 ≠ 0}, and denote the average number of non-zero coordinates by 𝜓 = E[|𝑛𝑧(𝑋)|] for a random vector 𝑋. We denote coordinate 𝑖 of a vector 𝑢 using subscripts: 𝑢𝑖. To refer to the 𝑗-th vector in a collection of vectors, we use superscripts: 𝑢^(𝑗). We write ⟨𝑢, 𝑣⟩ to express the inner product of two vectors 𝑢 and 𝑣. We denote the set of consecutive natural numbers {1, 2, . . . , 𝑚} by [𝑚] for brevity. Finally, we reserve capital letters to denote random variables (e.g., 𝑋) and calligraphic letters for sets (e.g., X).
3.2 Experimental Configuration

3.2.1 Datasets. We perform our empirical analysis on a number of publicly available datasets, summarized in Table 1. The largest dataset used in this work is the MS Marco³ Passage Retrieval v1 dataset [57], a retrieval and ranking collection from Microsoft. It consists of about 8.8 million short passages which, along with queries in natural language, originate from Bing. The queries are split into train, dev, and eval non-overlapping subsets. We use the small dev query set (consisting of 6,980 queries) in our analysis. We also experiment with 5 datasets from the BeIR [66] collection⁴: Natural Questions (NQ, question answering), Quora (duplicate detection), HotpotQA (question answering), Fever (fact extraction), and DBPedia (entity search). For a more detailed description of each dataset, we refer the reader to [66].

³Available at https://microsoft.github.io/msmarco/
⁴Available at https://github.com/beir-cellar/beir
3.2.2 Sparse Vectors. We convert the datasets above into sparse vectors using Splade [24] and Efficient Splade [38]. Splade⁵ [24] is a deep learning model that produces sparse representations for text. The vectors have roughly 30,000 dimensions, where each dimension corresponds to a term in the BERT [20] WordPiece [76] vocabulary. Non-zero entries in a vector reflect learnt term importance weights. Splade representations allow us to test the behavior of our algorithm on query vectors with a large number of non-zero entries. However, we also create another set of vectors using a more efficient variant of Splade, called Efficient Splade⁶ [38]. This model produces queries that have far fewer non-zero entries than the original Splade model, but documents that may have a larger number of non-zero entries. These two models give us a range of sparsity rates on which to examine our algorithms. As a way to compare and contrast the more pertinent properties of the learnt sparse representations, Table 1 shows the differences in the sparsity rate of the two embedding models for all datasets considered in this work.
3.2.3 Evaluation. Our main metric of interest is the accuracy⁷ of approximate algorithms, measured as follows: For every test query, we obtain the exact solution to MIPS by exhaustively searching over the entire dataset. We then obtain the approximate set of top-𝑘 documents using a system of interest. Accuracy is then measured as the fraction of exact documents that are present in the approximate set. This metric helps us study the impact of the different sources of error. We also report throughput as queries per second (QPS) in a subset of our experiments where efficiency takes center stage. When computing QPS, we include the time elapsed from the moment query vectors are presented to the algorithm to the moment the algorithm returns the requested top-𝑘 document vectors for all queries—we emphasize that the algorithms used in this work do not operate in batch mode. We note that, because this work is a study of retrieval of vectors, we do not factor into throughput the time it takes to embed a given piece of text.

3.2.4 Hardware and Code. We conduct experiments on a commercially available platform with an Intel Xeon Platinum 8481C Processor (Sapphire Rapids) with a clock rate of 1.9GHz, 20 virtual CPUs (2 vCPUs per physical core), and 44GB of main memory. This setup represents a typical server in a production environment—in fact, we rented this machine from the Google Cloud Platform.
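A minimal Rust sketch of the accuracy computation described in Section 3.2.3, with illustrative document identifiers:

```rust
use std::collections::HashSet;

// Accuracy: the fraction of the exact top-k documents that also appear in
// the approximate top-k set returned by the system under study.
fn accuracy(exact_top_k: &[u32], approximate_top_k: &[u32]) -> f64 {
    let approx: HashSet<u32> = approximate_top_k.iter().copied().collect();
    let hits = exact_top_k.iter().filter(|id| approx.contains(id)).count();
    hits as f64 / exact_top_k.len() as f64
}

fn main() {
    // 3 of the 4 exact results were recovered by the approximate system.
    assert_eq!(accuracy(&[1, 2, 3, 4], &[2, 3, 4, 9]), 0.75);
}
```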
We further note that we implemented all the methods discussed in this work in the Rust programming language. We rely on the Rust compiler for any platform-specific optimization and do not otherwise optimize the code for the Intel platform (such as by developing SIMD code).

4 ANALYSIS OF RANDOM PROJECTIONS FOR SPARSE VECTORS

As noted earlier, the historical bifurcation of the retrieval machinery can, in no small part, be attributed to the differences between sparse and dense vectors—in addition to the application domain. For example, sparse vectors are plagued by a much more serious case of the curse of dimensionality. In extremely high-dimensional spaces, where one may have thousands to millions of dimensions, the geometrical properties and probabilistic certainty that power clustering start to break down. So does our intuition of the space.
⁵Pre-trained checkpoint from HuggingFace, available at https://huggingface.co/naver/splade-cocondenser-ensembledistil
⁶Pre-trained checkpoints for the document and query encoders were obtained from https://huggingface.co/naver/efficient-splade-V-large-doc and https://huggingface.co/naver/efficient-splade-V-large-query, respectively
⁷What we call "accuracy" in this work is also known as "recall" in the ANN literature. However, "recall" is an overloaded term in the IR literature, as it also refers to the portion of relevant documents returned for a query. We use "accuracy" instead to avoid that confusion.

The high dimensionality of sparse vectors poses another challenge: greater computation required to perform basic operations. While optimized implementations (see, e.g., [35] and references therein) of spherical KMeans exist for sparse vectors, for example, their cost nonetheless grows with the number of dimensions. Standard KMeans is even more challenging: cluster centroids are likely to be high-dimensional dense vectors, leading to orders of magnitude more computation to perform cluster assignments in each iteration of the algorithm.
These difficulties—computational complexity and geometrical oddities—pose a fundamental challenge to clustering over sparse vectors. That leads naturally to dimensionality reduction, and in particular sketching [73]: summarizing a high-dimensional vector into a lower-dimensional space such that certain properties, such as the distance between points or inner products, are preserved with some quantifiable error. Sketching is appealing because the mathematics behind it offer guarantees in an oblivious manner: with no further assumptions on the source and nature of the vectors themselves or their distribution. Additionally, sketching a vector is often fast, since speed is a requisite for its application in streaming algorithms. Finally, the resulting sketch in a (dense and) low-dimensional space facilitates faster subsequent computation in exchange for a controllable error. In this work, we explore two such sketching functions (𝜙(·) in the notation of Algorithm 1). One classical result that has powered much of the research on sketching is the linear Johnson-Lindenstrauss (JL) transform [33], which produces dense sketches of its input and enables computing an unbiased estimate of inner product (or Euclidean distance). The other is the non-linear Sinnamon function [16], which produces sparse sketches of its input that enable deriving upper-bounds on inner product.
In the remainder of this section, we review these two algorithms in depth, and compare and contrast their performance. Importantly, we consider the approximation error in isolation: How does sketching affect MIPS if our MIPS algorithm itself were exact? In other words, if we searched exhaustively for the top-𝑘 maximizers of inner product with a query, what accuracy may we expect if that search were performed on sketches of vectors rather than on the original vectors?

4.1 The Johnson-Lindenstrauss Transform

4.1.1 Review. Let us repeat the result due to Johnson and Lindenstrauss [33] for convenience:

Lemma 4.1 (Johnson-Lindenstrauss). For $0 < \epsilon < 1$, any set $\mathcal{V}$ of $|\mathcal{V}|$ points in $\mathbb{R}^N$, and an integer $n = \Omega(\epsilon^{-2} \ln |\mathcal{V}|)$, there exists a Lipschitz mapping $f : \mathbb{R}^N \to \mathbb{R}^n$ such that

$$(1 - \epsilon)\|u - v\|_2^2 \;\leq\; \|f(u) - f(v)\|_2^2 \;\leq\; (1 + \epsilon)\|u - v\|_2^2,$$

for all $u, v \in \mathcal{V}$.
This result has been extensively studied and further developed since its introduction. Using simple proofs, for example, it can be shown that the mapping $f$ may be a linear transformation by an $n \times N$ random matrix $\Phi$ drawn from a certain class of distributions. Such a matrix $\Phi$ is said to form a JL transform [73]. There are many constructions of $\Phi$ that form a JL transform. It is trivial to show that when the entries of $\Phi$ are independently drawn from $\mathcal{N}(0, \frac{1}{n})$, then $\Phi$ is a JL transform with parameters $(\epsilon, \delta, \theta)$ if $n = \Omega(\epsilon^{-2} \ln(\theta/\delta))$. Another simple-to-prove example of a JL transform is $\Phi = \frac{1}{\sqrt{n}} R$, where $R \in \{-1, 1\}^{n \times N}$ is a matrix whose entries are independent Rademacher random variables. The literature offers a large number of other, more efficient constructions, such as the Fast JL Transform [1], as well as specific theoretical results for sparse vectors (e.g., [10]). We refer the interested reader to [73] for an excellent survey of these results.
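To make the Rademacher construction concrete, here is a small self-contained Rust sketch (ours, not the paper's implementation); the hash-derived signs merely stand in for i.i.d. Rademacher draws:

```rust
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

// Sign of entry R[row][col], derived deterministically so the matrix never
// needs to be materialized.
fn rademacher(row: usize, col: usize) -> f64 {
    let mut s = (row as u64).wrapping_mul(0x9E3779B97F4A7C15) ^ (col as u64);
    if xorshift(&mut s) & 1 == 0 { 1.0 } else { -1.0 }
}

// phi(u) = (1/sqrt(n)) R u, computed directly from the non-zeros of u.
fn jl_sketch(nonzeros: &[(usize, f64)], n: usize) -> Vec<f64> {
    let scale = 1.0 / (n as f64).sqrt();
    (0..n).map(|row| {
        nonzeros.iter().map(|&(col, v)| rademacher(row, col) * v).sum::<f64>() * scale
    }).collect()
}

fn main() {
    let u = [(7usize, 1.0), (19, 2.0)];
    let v = [(7usize, 3.0), (512, -1.0)];
    let (su, sv) = (jl_sketch(&u, 256), jl_sketch(&v, 256));
    // The inner product of sketches is an unbiased estimate of <u, v> = 3.0.
    let estimate: f64 = su.iter().zip(&sv).map(|(a, b)| a * b).sum();
    println!("estimate = {estimate:.2}, exact = 3.00");
}
```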
4.1.2 Theoretical Analysis. In this work, we are interested in the transformation in the context of inner product rather than the ℓ2 norm and Euclidean distance. Let us take $\phi(u) = Ru$, with $R \in \{-1/\sqrt{n}, 1/\sqrt{n}\}^{n \times N}$, as one candidate sketching function in Algorithm 1, and state the following results for our particular construction:

Theorem 4.2. Fix two vectors $u, v \in \mathbb{R}^N$. Define $Z_{\mathrm{Sketch}} = \langle \phi(u), \phi(v) \rangle$ as the random variable representing the inner product of sketches of size $n$, prepared using the projection $\phi(u) = Ru$, with $R \in \{-1/\sqrt{n}, 1/\sqrt{n}\}^{n \times N}$ a random Rademacher matrix. $Z_{\mathrm{Sketch}}$ is an unbiased estimator of $\langle u, v \rangle$. Its distribution tends to a Gaussian with variance:
$$\frac{1}{n}\Big(\|u\|_2^2 \|v\|_2^2 + \langle u, v \rangle^2 - 2\sum_i u_i^2 v_i^2\Big). \qquad (2)$$

We give our proof of the claim above in Appendix A. We next make the following claim for a fixed query vector $q$ and a random document vector, thereby taking it a step closer to the MIPS setup. We present a proof in Appendix B.

Theorem 4.3. Fix a query vector $q \in \mathbb{R}^N$ and let $X$ be a random vector drawn according to the following probabilistic model. Coordinate $i$, $X_i$, is non-zero with probability $p_i > 0$ and, if it is non-zero, draws its value from a distribution with mean $\mu$ and variance $\sigma^2$. $Z_{\mathrm{Sketch}} = \langle \phi(q), \phi(X) \rangle$, with $\phi(u) = Ru$ and $R \in \{-1/\sqrt{n}, 1/\sqrt{n}\}^{n \times N}$, has expected value $\mu \sum_i p_i q_i$ and variance:

$$\frac{1}{n}\Big[(\mu^2 + \sigma^2)\Big(\|q\|_2^2 \sum_i p_i - \sum_i p_i q_i^2\Big) + \mu^2\Big(\big(\sum_i p_i q_i\big)^2 - \sum_i p_i^2 q_i^2\Big)\Big]. \qquad (3)$$
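Equation (2) is easy to sanity-check numerically. The following self-contained Rust fragment (ours, with a toy xorshift generator standing in for a proper RNG) draws many Rademacher matrices, sketches two fixed vectors, and compares the empirical mean and variance of $Z_{\mathrm{Sketch}}$ against $\langle u, v \rangle$ and Equation (2); the numbers should agree up to Monte Carlo error:

```rust
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let u = [1.0_f64, 0.0, 2.0, 0.0, -1.0, 0.0, 0.0, 0.5];
    let v = [0.5_f64, 1.0, 1.0, 0.0, 0.0, 0.0, -2.0, 1.0];
    let (n, trials) = (64usize, 100_000);
    let mut state = 7u64;
    let (mut sum, mut sum_sq) = (0.0, 0.0);
    for _ in 0..trials {
        let mut z = 0.0;
        for _row in 0..n {
            // One row of R: independent +-1/sqrt(n) entries applied to u and v.
            let (mut ru, mut rv) = (0.0, 0.0);
            for i in 0..u.len() {
                let sign = if xorshift(&mut state) & 1 == 0 { 1.0 } else { -1.0 };
                ru += sign * u[i];
                rv += sign * v[i];
            }
            z += ru * rv / n as f64;
        }
        sum += z;
        sum_sq += z * z;
    }
    let mean = sum / trials as f64;
    let var = sum_sq / trials as f64 - mean * mean;
    // Equation (2): (|u|^2 |v|^2 + <u,v>^2 - 2 * sum_i u_i^2 v_i^2) / n.
    let dot: f64 = u.iter().zip(&v).map(|(a, b)| a * b).sum();
    let nu: f64 = u.iter().map(|a| a * a).sum();
    let nv: f64 = v.iter().map(|b| b * b).sum();
    let cross: f64 = u.iter().zip(&v).map(|(a, b)| a * a * b * b).sum();
    let predicted = (nu * nv + dot * dot - 2.0 * cross) / n as f64;
    println!("mean {mean:.3} vs exact {dot:.3}; var {var:.4} vs predicted {predicted:.4}");
}
```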
Consider the special case where $p_i = \psi/N$ for some constant $\psi$ for all dimensions $i$. Further assume, without loss of generality, that the (fixed) query vector has unit norm: $\|q\|_2 = 1$. It can be observed that the variance of $Z_{\textsc{Sketch}}$ decomposes into a term that is $(\mu^2 + \sigma^2)(1 - 1/N)\psi/n$, and a second term that is a function of $1/N^2$. The mean is a linear function of the non-zero coordinates in the query: $\mu\psi(\sum_i q_i)/N$. As $N$ grows, the mean of $Z_{\textsc{Sketch}}$ tends to 0 at a rate proportional to the sparsity rate ($\psi/N$), while its variance tends to $(\mu^2 + \sigma^2)\psi/n$.
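To make the asymptotics above concrete, the following small NumPy experiment (ours, not the authors' code; all parameter values and the Gaussian value distribution are illustrative assumptions) draws random documents under the model of Theorem 4.3 and compares the empirical mean and variance of $Z_{\textsc{Sketch}}$ against the formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10_000, 256               # ambient and sketch dimensionality
psi, mu, sigma = 100, 1.0, 0.5   # expected nnz, value mean and std (assumed)

# phi(u) = R @ u with entries of R drawn uniformly from {-1/sqrt(n), +1/sqrt(n)}.
R = rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)

def random_sparse_vector():
    """Coordinate i is non-zero w.p. p_i = psi/N; values ~ N(mu, sigma^2)."""
    mask = rng.random(N) < psi / N
    x = np.zeros(N)
    x[mask] = rng.normal(mu, sigma, size=int(mask.sum()))
    return x

q = random_sparse_vector()
q /= np.linalg.norm(q)           # fixed query with unit norm, as in the special case
phi_q = R @ q

# Z_Sketch = <phi(q), phi(X)> over many random documents X.
z = np.array([phi_q @ (R @ random_sparse_vector()) for _ in range(500)])
print("mean:", z.mean(), " theory:", mu * psi * q.sum() / N)
print("var :", z.var(),  " theory (leading term):", (mu**2 + sigma**2) * psi / n)
```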
The analysis above suggests that the ability of $\phi(\cdot)$, as defined in this section, to preserve the inner product of a query vector with a randomly drawn document vector deteriorates as a function of the number of non-zero coordinates. For example, when the number of non-zero coordinates becomes larger, $\langle \phi(q), \phi(X) \rangle$ for a fixed query $q$ and a random vector $X$ becomes less reliable because the variance of the approximation increases. Nonetheless, as we see later in this work, the degree of noise is often manageable in practice, as evidenced by the accuracy of Algorithm 2.

4.2 The Sinnamon Transform

4.2.1 Review. Like the JL transform, Sinnamon [16] aims to reduce the dimensionality of (sparse) vectors. Unlike the JL transform, it does so through a non-linear mapping.
Sinnamon uses half the sketch to record upper-bounds on the values of non-zero coordinates in a vector, and the other half to register lower-bounds. For notational convenience, let us assume that the sketch size is $n = 2m$. Given a vector $u \in \mathbb{R}^N$ and $h$ independent random mappings $\pi_o : [N] \rightarrow [m]$ ($1 \le o \le h$), Sinnamon constructs the upper-bound sketch $\overline{u} \in \mathbb{R}^m$ where its $k$-th coordinate is assigned the following value:

$$\overline{u}_k \leftarrow \max_{\{i \in nz(u) \,\mid\, \exists\, o \text{ s.t. } \pi_o(i) = k\}} u_i. \quad (4)$$

The lower-bound sketch, $\underline{u}$, is filled in a symmetric manner, in the sense that the algorithmic procedure is the same but the operator changes from $\max(\cdot)$ to $\min(\cdot)$.
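A minimal sketch of the construction in Eq. (4), under the assumption that the $h$ random mappings $\pi_o$ are materialized as integer index arrays (our illustration, not the paper's implementation); untouched cells default to 0, which remains a valid bound for the non-negative vectors considered later:

```python
import numpy as np

def sinnamon_sketch(u, m, pis):
    """Return (upper, lower) sketches of a sparse vector stored densely in u.

    pis: list of h arrays, each mapping coordinate ids [N] -> cells [m].
    """
    upper = np.zeros(m)
    lower = np.zeros(m)
    for i in np.flatnonzero(u):             # only nz(u) contributes
        for pi in pis:
            k = pi[i]
            upper[k] = max(upper[k], u[i])  # Eq. (4): running maximum
            lower[k] = min(lower[k], u[i])  # symmetric: running minimum
    return upper, lower

rng = np.random.default_rng(0)
N, m, h = 30_000, 512, 2
pis = [rng.integers(0, m, size=N) for _ in range(h)]
```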
Computing the inner product between a query vector $q \in \mathbb{R}^N$ and a vector $u$ given its sketch ($\phi(u) = \overline{u} \oplus \underline{u}$) uses the following procedure: positive query values are multiplied by the least upper-bound from $\overline{u}$, and negative query values by the greatest lower-bound from $\underline{u}$:

$$\sum_{i \in nz(q)} q_i\, \mathbb{1}_{i \in nz(u)} \Big( \mathbb{1}_{q_i > 0} \min_{k \in \{\pi_o(i)\,:\,1 \le o \le h\}} \overline{u}_k + \mathbb{1}_{q_i < 0} \max_{k \in \{\pi_o(i)\,:\,1 \le o \le h\}} \underline{u}_k \Big). \quad (5)$$
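Continuing the sketch above, the estimate of Eq. (5) can be rendered as follows. The indicator $\mathbb{1}_{i \in nz(u)}$, discussed next, is modeled here as a plain Python set of non-zero coordinate ids; the actual storage format in [16] may differ:

```python
def sketch_inner_product(q_nz, upper, lower, pis, nz_u):
    """q_nz: dict {i: q_i} over nz(q); nz_u: set of non-zero coords of u."""
    total = 0.0
    for i, qi in q_nz.items():
        if i not in nz_u:          # indicator zeroes out this summand
            continue
        cells = [pi[i] for pi in pis]
        if qi > 0:                 # least upper-bound across the h mappings
            total += qi * min(upper[k] for k in cells)
        else:                      # greatest lower-bound across the h mappings
            total += qi * max(lower[k] for k in cells)
    return total
```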
The indicator $\mathbb{1}_{i \in nz(u)}$, which is kept in conjunction with the sketch, guarantees that the partial inner product between a query coordinate $q_i$ and the sketch of a document vector (i.e., an individual summand in Equation (5)) is 0 if $i \notin nz(u)$. That pairing of the sketch with the indicator function improves the bound on the error dramatically while maintaining a large compression rate. For formal results on the probability of the inner product error, we refer the reader to the original work [16].

4.2.2 Theoretical Analysis. In this work, we use a simplified instance of Sinnamon, which we call Weak Sinnamon, by (a) setting the number of random mappings to 1, which we denote by $\pi$; and (b) removing $\mathbb{1}_{i \in nz(u)}$ from the inner product computation. These two reductions have important side effects that ultimately enable us to apply existing clustering algorithms and compute the inner product between sketches. Let us focus on the upper-bound sketch to illustrate these differences; similar arguments can be made for the lower-bound sketch. First, notice that the upper-bound sketch of a document vector simplifies to $\overline{u}$ where:
$$\overline{u}_k \leftarrow \max_{\{i \in nz(u) \,\mid\, \pi(i) = k\}} u_i, \quad (6)$$

and that the upper-bound sketch of a query vector, $\overline{q}$, becomes:

$$\overline{q}_k \leftarrow \sum_{\{i \in nz(q) \,\mid\, \pi(i) = k \,\wedge\, q_i > 0\}} q_i. \quad (7)$$

We denote the former by $\phi_d(\cdot)$ (for document) and the latter by $\phi_q(\cdot)$ (for query).
Second, the inner product computation between the sketches of query and document vectors reduces to:

$$\langle \phi_q(q), \phi_d(u) \rangle = \langle \overline{q}, \overline{u} \rangle + \langle \underline{q}, \underline{u} \rangle = \sum_{i:\, q_i > 0} q_i \overline{u}_{\pi(i)} + \sum_{i:\, q_i < 0} q_i \underline{u}_{\pi(i)}. \quad (8)$$

We now extend the analysis in [16] to the setup above. We begin by stating the following claim that is trivially true:

Theorem 4.4. For a query vector $q$ and document vector $u$, $\langle q, u \rangle \le \langle \phi_q(q), \phi_d(u) \rangle$.

Importantly, the inner product between query and document sketches is not an unbiased estimator of the inner product between the original vectors.
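To make Eqs. (6) through (8) concrete, here is a minimal NumPy rendering of Weak Sinnamon (our illustration, with untouched cells defaulting to 0). The key consequence of the query-side summation in Eq. (7) is that the estimate in Eq. (8) is an ordinary inner product between two fixed-width vectors, which is what lets off-the-shelf dense tooling operate directly on sketches:

```python
import numpy as np

def weak_sinnamon_doc(u, m, pi):
    """Document sketch: coordinate-wise max/min into 2m cells, per Eq. (6)."""
    upper = np.zeros(m)
    lower = np.zeros(m)
    for i in np.flatnonzero(u):
        k = pi[i]
        upper[k] = max(upper[k], u[i])
        lower[k] = min(lower[k], u[i])
    return np.concatenate([upper, lower])

def weak_sinnamon_query(q, m, pi):
    """Query sketch: sum positive/negative values into cells, per Eq. (7)."""
    pos = np.zeros(m)
    neg = np.zeros(m)
    for i in np.flatnonzero(q):
        if q[i] > 0:
            pos[pi[i]] += q[i]
        else:
            neg[pi[i]] += q[i]
    return np.concatenate([pos, neg])

# weak_sinnamon_query(q, m, pi) @ weak_sinnamon_doc(u, m, pi) expands exactly
# to Eq. (8) and, by construction, never underestimates <q, u> (Theorem 4.4).
```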
Let us now model the probability of the approximation error. Consider the upper-bound sketch first. Using a similar argument to Theorem 5.4 of [16], we state the following result and provide a proof in Appendix C:

Theorem 4.5. Let $X$ be a random vector drawn according to the following probabilistic model. Coordinate $i$, $X_i$, is non-zero with probability $p_i > 0$ and, if it is non-zero, draws its value from a distribution with PDF $\phi$ and CDF $\Phi$. Then:

$$\mathbb{P}\big[\overline{X}_{\pi(i)} - X_i \le \delta\big] \approx (1 - p_i)\, e^{-(1 - \Phi(\delta)) \sum_{j \ne i} p_j / m} + p_i \int e^{-(1 - \Phi(\alpha + \delta)) \sum_{j \ne i} p_j / m}\, \phi(\alpha)\, d\alpha. \quad (9)$$

A symmetric argument can be made for the error of the lower-bound sketch. Crucially, given the result above, which formalizes the CDF of the sketching approximation error, we can obtain the expected value and variance of the random variables $\overline{X}_{\pi(i)} - X_i$ and $\underline{X}_{\pi(i)} - X_i$ for all dimensions $i$.
From there, and following similar arguments as the proof of Theorem 5.8 of [16], it is easy to show that the approximation error takes on a Gaussian distribution with mean:

$$\sum_{i:\, q_i > 0} q_i\, \mathbb{E}\big[\overline{X}_{\pi(i)} - X_i\big] + \sum_{i:\, q_i < 0} q_i\, \mathbb{E}\big[\underline{X}_{\pi(i)} - X_i\big]$$

and variance that is:

$$\sum_{i:\, q_i > 0} q_i^2\, \mathrm{Var}\big[\overline{X}_{\pi(i)} - X_i\big] + \sum_{i:\, q_i < 0} q_i^2\, \mathrm{Var}\big[\underline{X}_{\pi(i)} - X_i\big].$$
Let us illustrate the implications of Theorem 4.5 by considering the special case where $p_i = \psi/N$ for all dimensions $i$. As the sparsity rate increases and $N$ grows, the second term in Equation (9) tends to 0 at a rate proportional to $\psi/N$, while the first term dominates, tending approximately to $\exp\big(-(1 - \Phi(\delta))\psi/m\big)$. By making $\psi/m$ smaller, we can control the approximation error and have it concentrate on smaller magnitudes. That subsequently translates to a more accurate inner product between a fixed query and a randomly drawn document vector.

As a final remark on Weak Sinnamon, we note that when $n$ is larger than the number of non-zero coordinates in a document vector, the resulting sketch itself is sparse. Furthermore, sketching using Weak Sinnamon only requires $O(\psi)$ operations, with $\psi$ denoting the number of non-zero coordinates, while the JL transform has a sketching complexity of $O(n\psi)$. As we explain later, these properties will play a key role in the efficiency of sparse MIPS.
4.3 Empirical Comparison

Our results from the preceding sections shed light on how the JL and Weak Sinnamon transformations are expected to behave when applied to sparse vectors. Our main conclusion is that the sparsity rate heavily affects the approximation error. In this section, we design experiments that help us observe the expected behavior in practice and compare the two dimensionality reduction algorithms on real data.

Given a sparse dataset and a set of queries, we first obtain the exact top-1 document for each query by performing an exhaustive search over the entire collection. We then create a second dataset wherein each vector is a sketch of a vector in the original dataset. Next, we perform exact search over the sketch dataset to obtain the top-$k'$ ($k' \ge 1$) documents, and report the accuracy of the approximate retrieval.

There are two parameters in this setup that are of interest to us. The first is the sketch size, $n$. By fixing the dataset (and thus its sparsity rate) but increasing the sketch size, we wish to empirically quantify the effect of using larger sketches on the ability of each algorithm to preserve the inner product. Note that, because the vectors are non-negative, Weak Sinnamon only uses half the sketch capacity to form the upper-bound sketch, reducing its effective sketch size to $n/2$.
The second factor is $k'$, which controls how "hard" a retrieval algorithm must work to compensate for the approximation error. Changing $k'$ helps us understand if the error introduced by a particular sketch size can be attenuated by simply retrieving more candidates and later re-ranking them according to their exact score.

The results of our experiments are presented in Figure 1 for select datasets embedded with the Splade model. We chose these datasets because they have very different sizes and sparsity rates, as shown in Table 1, with Quora having the largest sparsity rate and fewest documents, and NQ the smallest sparsity rate and a medium collection size. Naturally, our observations are consistent with what the theoretical results predict. The sketch quality improves as its size increases. That shows the effect of the parameter $n$ on the approximation variance of the JL transform and the concentration of error in Weak Sinnamon sketches.
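The evaluation protocol just described can be summarized in a few lines of NumPy. This is an illustrative harness under the assumption that the documents X and queries Q fit in memory as dense matrices; it is not the code behind the reported numbers:

```python
import numpy as np

def topk_accuracy(X, Q, doc_sketch, query_sketch, k=1, k_prime=10):
    """Exhaustive search over sketches, exact re-ranking of k' candidates,
    and accuracy against the exact top-k set."""
    Xs = np.stack([doc_sketch(x) for x in X])        # sketch the collection once
    hits = 0
    for q in Q:
        exact = np.argsort(-(X @ q))[:k]             # ground truth by exact score
        cand = np.argsort(-(Xs @ query_sketch(q)))[:k_prime]
        kept = cand[np.argsort(-(X[cand] @ q))][:k]  # re-rank candidates exactly
        hits += len(set(exact) & set(kept))
    return hits / (k * len(Q))
```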
(a) Quora (b) NQ
Fig. 1. Top-1 accuracy of retrieval for test queries over sketches produced by the JL transform (left column), Weak Sinnamon (middle column), and, as a point of reference, the original Sinnamon algorithm (right column). We retrieve the top-$k'$ documents by performing an exhaustive search over the sketch collection and re-ranking the candidates by exact inner product to obtain the top-1 document and compute accuracy. Each line in the figures represents a different sketch size $n$. We note that Weak Sinnamon and Sinnamon only use half the sketch to record upper-bounds but leave the lower-bound sketch unused because Splade vectors are non-negative. That implies that their effective sketch size is half that of the JL transform's.
Another unsurprising finding is that Weak Sinnamon's sensitivity to the $\psi/n$ factor becomes evident on NQ: when the ratio between the number of non-zero coordinates and the sketch size ($\psi/n$) is large, the variance of the approximation error grows. The reason is twofold: more non-zero coordinates are likely to collide as vectors become more dense; and, additionally, sketches themselves become more dense, thereby increasing the likelihood of error for inactive coordinates. To contextualize Weak Sinnamon and the effects of our modifications to the original algorithm on the approximation error, we also plot in Figure 1 the performance of Sinnamon. While increasing the sketch size is one way to lower the probability of error, casting a wider net (i.e., $k' > k$) followed by re-ranking appears to also improve retrieval quality.
Now that we have a better understanding of the effect of the parameters on the quality of the sketching algorithms, let us choose one configuration and repeat the experiments above on all our datasets. One noteworthy adjustment is that we set Weak Sinnamon's effective sketch size to match that of the JL transform's: as we noted, because Weak Sinnamon leaves the lower-bound sketch unused for non-negative vectors, we re-allocate it for the upper-bound sketch, in effect giving Weak Sinnamon's upper-bound sketch $n$ dimensions to work with. Another change is that we use a more challenging configuration and perform top-10 retrieval. Finally, we also include Efficient Splade for completeness.
(a) Splade (b) Efficient Splade
Fig. 2. Top-10 accuracy of retrieval for test queries over sketches of size $n = 1024$ produced by the JL transform (left column), Weak Sinnamon (middle column), and, for reference, the original Sinnamon algorithm (right column). As in Figure 1, we retrieve the top-$k'$ documents by performing an exhaustive search over the sketch collection and re-ranking the candidates by exact inner product to obtain the top-10 documents and compute accuracy. Unlike in Figure 1, the sketch size is fixed at $n = 1024$ here. In these experiments, we also adjust the effective sketch size of Weak Sinnamon and Sinnamon to match that of the JL transform's.

Figure 2 shows the results of these experiments. The general trends observed in these figures are consistent with the findings of Figure 1: obtaining a larger pool of candidates from sketches and re-ranking them according to their exact inner product is a reliable way of countering the approximation error; and Weak Sinnamon generally underperforms the JL transform in preserving the inner product between vectors. Additionally, as vectors become more dense, the sketching quality degrades, leading to a higher approximation error.
Another interesting but expected phenomenon is that sketching performs comparatively poorly on Efficient Splade. That is because query vectors generated by the Efficient Splade model are more sparse than those made by Splade. When a query has few non-zero coordinates, the expected inner product becomes small while the variance of JL transform sketches concentrates around a constant, as predicted by Theorem 4.3. As for Weak Sinnamon, when queries have a large number of non-zero coordinates, the shape of the distribution of the error becomes less sensitive to the approximation error of individual coordinates; with fewer non-zero coordinates in the query vector, the opposite happens.

As a final observation, we notice that retrieval accuracy is generally higher for the Quora, MS Marco, and NQ datasets. That is easy to explain for Quora as it is a more sparse dataset with a much smaller $\psi/n$. On the other hand, the observed trend is rather intriguing for a larger and more dense dataset such as MS Marco. On closer inspection, however, it appears that the stronger performance can be attributed to the probabilities of coordinates being non-zero (i.e., the $p_i$'s).
(a) Splade (b) Efficient Splade
Fig. 3. Probability of each coordinate being non-zero ($p_i$ for coordinate $i$) for Splade and Efficient Splade vectors of several datasets. To aid visualization, we sort the coordinates by $p_i$ in descending order. A Zipfian distribution would manifest as a line in the log-log plot. Notice that this distribution is closer to uniform for MS Marco than others.

In Figure 3, we plot the distribution of $p_i$'s but, to make the illustration cleaner, sort the coordinates by their $p_i$ in descending order. Interestingly, the distribution of $p_i$'s is closer to uniform for MS Marco and NQ, while it is more heavily skewed for Fever, DBPedia, and HotpotQA.
5 EVALUATION OF CLUSTERING OVER SKETCHES OF SPARSE VECTORS

In the preceding section, we were squarely concerned with the ability of the two sketching algorithms to approximately preserve the inner product between a query vector and an arbitrary document vector. That analysis is relevant if one were to operate directly on sketches, as opposed to the original vectors, when, say, building a graph-based nearest neighbor search index such as HNSW [50] or IP-NSW [55]. In this work, our primary use for sketches is to form partitions in the context of Algorithms 1 and 2: whether R searches over sketches or the original vectors is left as a choice.

In that framework, Section 4 has already studied the first line of the two algorithms: sketching the sparse vectors. In this section, we turn to the clustering procedure and empirically evaluate two alternatives: standard and spherical KMeans. Note that the clustering choice is the last piece required to complete the two algorithms and apply IVF-style search to sparse vectors.
Standard KMeans is an iterative protocol that partitions the input data into a predefined number of clusters, $K$. It first samples $K$ arbitrary points, called "centroids," from the data distribution at random (though other initialization protocols exist, such as KMeans++ [5]). It then repeats two steps until convergence: in the first step, it assigns each data point to its nearest centroid by Euclidean distance, forming partitions; in the second step, it recomputes each centroid as the mean of all data points assigned to its partition. While this Expectation-Maximization procedure may fall into local optima, it generally produces partitions that approximate the Voronoi regions of a dataset.
Spherical KMeans works similarly, with the notable exception that at the end of each iteration it normalizes the centroids, projecting them onto the unit sphere. This form of clustering has been used in the past for topical analysis of text documents [21], among other applications. Both of these clustering algorithms are popular choices in IVF-based approximate nearest neighbor search, as evidenced by their integration into commonly used software packages such as FAISS [32]. As such, we plug the two methods into Algorithms 1 and 2 and apply them to our datasets. Our objective is to understand the differences between the two clustering choices in terms of their role in the overall retrieval quality as well as their sensitivity to the choice of sketching algorithm.
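For concreteness, the following illustrative NumPy snippet (ours, not the FAISS implementation) covers both variants: with normalize=False it is standard KMeans, and with normalize=True the centroid-normalization step described above turns it into spherical KMeans.

```python
import numpy as np

def kmeans(X, K, iters=20, normalize=False, seed=0):
    rng = np.random.default_rng(seed)
    # random initialization: K points sampled from the data
    C = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        d2 = (X**2).sum(1)[:, None] - 2 * X @ C.T + (C**2).sum(1)[None, :]
        assign = d2.argmin(axis=1)
        # update step: each centroid is the mean of its assigned points
        for k in range(K):
            members = X[assign == k]
            if len(members):
                C[k] = members.mean(axis=0)
        if normalize:  # spherical variant: project centroids onto unit sphere
            C /= np.linalg.norm(C, axis=1, keepdims=True) + 1e-12
    return C, assign
```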
Before we state our findings, a note on our choice of “the number of documents examined” (ℓ) versus the more familiar notion of “the number of clusters searched” (commonly known as nProbe): The standard KMeans algorithm is highly sensitive to vector norms. That is natural, as the algorithm cares solely about the Euclidean distance between points within a partition. When it operates on a collection of vectors with varying norms, it intuitively tends to isolate high-normed points in their own small partitions while lumping the low-normed vectors together into massive clusters. As a result of this phenomenon, partitions produced by standard KMeans are often imbalanced. Probing a fixed number of partitions at search time therefore puts KMeans at an unfair disadvantage compared to its spherical variant. By choosing to work with ℓ, rather than fixating on the number of top clusters, we remove that variable from the equation.
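A minimal sketch of what probing by a document budget, rather than a fixed nProbe, might look like; `partition_sizes` and the dense centroid matrix are hypothetical inputs standing in for the output of Algorithm 1.

```python
import numpy as np

def probe_by_budget(q_sketch, centroids, partition_sizes, ell):
    """Rank partitions by inner product with the query sketch, then keep
    adding partitions until roughly `ell` documents have been accumulated."""
    order = np.argsort(-(centroids @ q_sketch))  # best partitions first
    probed, budget = [], 0
    for i in order:
        probed.append(int(i))
        budget += partition_sizes[i]
        if budget >= ell:
            break
    return probed
```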
Figure 4 summarizes our results for the Splade-generated vectors. We plot one figure per dataset, where each figure depicts the relationship between top-10 accuracy and ℓ (expressed as a percentage of the total number of documents). When applying Algorithm 1 to the datasets, we set the sketch size to 1024 per the findings of Section 4. Additionally, we fix the number of partitions 𝑃 to 4√|X|, where |X| is the number of documents in dataset X. Plots for Efficient Splade are shown separately in Figure 5. One of the most striking observations is that spherical KMeans appears to be the universally better choice on the vector datasets we examine in this work. By partitioning the data with spherical KMeans in Algorithm 1 and examining at most 10% of the collection, we often reach a top-10 accuracy well above 0.8, and often above 0.9. This is in contrast to the performance of standard KMeans, which often lags behind.
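As a side note, the partition count above is a direct function of the collection size; a trivial helper follows, with an illustrative count that assumes a collection of roughly 8.8M documents.

```python
import math

def num_partitions(num_docs: int) -> int:
    """P = 4 * sqrt(|X|), the setting used in these experiments."""
    return 4 * math.isqrt(num_docs)

# e.g., num_partitions(8_800_000) gives P = 11,864 partitions.
```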
We are also surprised by how little the choice of the JL transform versus Weak Sinnamon appears to matter, in the high-accuracy regime, for the purposes of partitioning with spherical KMeans and retrieval over the resulting partitions. When the clustering method is standard KMeans, on the other hand, the difference between the two sketching algorithms is sometimes more noticeable. Additionally, and perhaps unsurprisingly, the difference between the two sketching methods is more pronounced in experiments on the Efficient Splade vector datasets.

6 CLUSTERING AS DYNAMIC PRUNING FOR THE INVERTED INDEX

Throughout the previous sections, we simply assumed that once Algorithm 2 has identified the top partitions and accumulated the ℓ-subset of documents to examine, the task of actually finding the top-𝑘 vectors from that restricted subset would be delegated to a secondary MIPS algorithm, R, which we have thus far ignored. We now wish to revisit R.
(Figure 4 panels: (a) MS Marco, (b) NQ, (c) Quora, (d) HotpotQA, (e) Fever, (f) DBPedia; each plots top-10 accuracy against the percentage of documents probed for spherical and standard KMeans with JL and Weak Sinnamon sketches.)
Fig. 4. Top-10 accuracy of Algorithm 2 for Splade vectors versus the number of documents examined (ℓ)—expressed as a percentage of the size of the collection—for different clustering algorithms (standard and spherical KMeans) and different sketching mechanisms (JL transform and Weak Sinnamon, with a sketch size of 1024). Note that the vertical axis is not consistent across figures.
(Figure 5 panels: (a) MS Marco, (b) NQ, (c) Quora, (d) HotpotQA, (e) Fever, (f) DBPedia.)

Fig. 5. Top-10 accuracy of Algorithm 2 for Efficient Splade vs. the number of documents examined (ℓ).

There are many ways one could design and implement R and apply it to the set of partitions PI on Line 10 of Algorithm 2. For example, R may be an exhaustive search—an option we used previously because we were assessing retrieval quality alone and did not concern ourselves with efficiency. As another example, if partitions are stored on separate physical (or logical) retrieval nodes in a distributed system, each node could use an inverted index-based algorithm to find the top-𝑘 candidates from its partition of the index.

Algorithm 3: Constructing a partitioned inverted index
Input: Collection of sparse vectors, X ⊂ R^𝑁; clusters P obtained from Algorithm 1.
Result: Inverted index, I; skip list, S.
1: I ← ∅  ⊲ Initialize the inverted index
2: S ← ∅  ⊲ Initialize the skip list
3: for P𝑖 ∈ P do
4:     SortAscending(P𝑖)  ⊲ Sort partition by document identifier
5:     for 𝑗 ∈ P𝑖 do
6:         for 𝑡 ∈ nz(𝑥^(𝑗)) do
7:             S[𝑡].Append(𝑖, |I[𝑡]|) if it is the first time a document from P𝑖 is recorded in I[𝑡]
8:             I[𝑡].Append(𝑗, 𝑥^(𝑗)_𝑡)  ⊲ Append document identifier and value to list
9:         end for
10:     end for
11: end for
12: return I, S

This section proposes a novel alternative for R that is based on the insight that clustering documents for IVF-based search and dynamic pruning algorithms in the inverted index-based top-𝑘 retrieval literature are intimately connected.
6.1 Partitioning Inverted Lists

Consider an optimal partitioning P∗ of a collection X of sparse vectors into 𝑃 clusters with a set of representative points C∗. In the context of MIPS, optimality implies that, for any given sparse query 𝑞, the solution 𝐶𝑖 = arg max_{𝑐 ∈ C∗} ⟨𝑞, 𝑐⟩ identifies the partition P𝑖 that contains the maximizer arg max_{𝑥 ∈ X} ⟨𝑞, 𝑥⟩. That implies that, when performing MIPS for a given query, we dynamically prune the set of documents in X \ P𝑖; the procedure is dynamic because P𝑖 depends on the query vector.
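The following toy numpy sketch illustrates the pruning view: route the query to the partition whose representative maximizes the inner product, then score only that partition. Dense vectors, random assignments, and mean-vector representatives are used purely for brevity; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))            # toy document vectors
labels = rng.integers(0, 8, size=1000)     # toy partition assignments
C = np.stack([X[labels == i].mean(axis=0) for i in range(8)])  # representatives

q = rng.normal(size=64)
i = int(np.argmax(C @ q))                  # arg max over representatives
cand = np.flatnonzero(labels == i)         # documents in partition P_i
best = cand[int(np.argmax(X[cand] @ q))]   # MIPS restricted to P_i
# Every document outside partition i was pruned without being scored, and
# which documents get pruned depends on q: the pruning is dynamic.
```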
Consider now an inverted index that represents X. Typically, its inverted lists are sorted either by document identifiers or by the “impact” of each document on the final inner product score [68]. The former is consequential for compression [60] and for document-at-a-time dynamic pruning algorithms [68], while the latter provides an opportunity for early termination of score computation. We reiterate that all of these techniques either work only on non-negative vectors, or their extension to negative vectors is non-trivial. But, as we explain, P∗ induces another organization of inverted lists that enables fast, approximate retrieval in the context of Algorithm 2 for general sparse vectors. Our construction, detailed in Algorithm 3, is straightforward. At a high level, when forming an inverted list for a coordinate 𝑡, we simply iterate through partitions and add vectors from each partition whose coordinate 𝑡 is non-zero to the inverted list. As we do so, for each inverted list, we record the offsets within the list of each partition in a separate skip list. Together, the two structures enable us to traverse the inverted lists while only evaluating documents in a given set of partitions. An alternative way of viewing the joint inverted and skip lists is to think of each inverted list as a set of variable-length segments or blocks, where documents within each block are grouped according to a clustering algorithm.
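A compact Python sketch of this construction, assuming documents are given as {coordinate: value} maps and `partitions` is a list of document-id lists from Algorithm 1; the names are ours, not the paper's.

```python
from collections import defaultdict

def build_partitioned_index(docs, partitions):
    """Mirror of Algorithm 3: returns an inverted index I[t] of (doc_id, value)
    pairs grouped by partition, plus a skip list S[t] of (partition_id, offset)."""
    index = defaultdict(list)   # I
    skips = defaultdict(list)   # S
    for pid, members in enumerate(partitions):
        for doc_id in sorted(members):              # sort by document identifier
            for t, value in docs[doc_id].items():   # non-zero coordinates
                if not skips[t] or skips[t][-1][0] != pid:
                    # first document from this partition in list I[t]
                    skips[t].append((pid, len(index[t])))
                index[t].append((doc_id, value))
    return index, skips
```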
Before we demonstrate the retrieval logic, we must remark on the space complexity of the resulting structure. There are two factors to comment on. First, sorting the inverted lists by partition identifier rather than document identifier may lead to suboptimality for compression algorithms. That is because the new arrangement of documents may distort the 𝑑-gaps (i.e., the difference between two consecutive document identifiers in an inverted list).
Algorithm 4: Query processing over partitioned inverted lists
Input: Inverted index, I, and skip list, S, obtained from Algorithm 3; sparse query vector, 𝑞; set of partitions to probe, PI, from Algorithm 2.
Result: Top-𝑘 vectors.
1: scores ← ∅  ⊲ A mapping from documents to scores
2: for 𝑡 ∈ nz(𝑞) do
3:     SLPosition ← 0  ⊲ Pointer into the skip list S[𝑡]
4:     for P𝑖 ∈ PI do
5:         Advance SLPosition until the partition of S[𝑡][SLPosition] matches P𝑖
6:         begin ← S[𝑡][SLPosition].Offset
7:         end ← S[𝑡][SLPosition + 1].Offset
8:         for (docid, value) ∈ I[𝑡][begin . . . end] do
9:             scores[docid] ← scores[docid] + 𝑞𝑡 × value
10:         end for
11:     end for
12: end for
13: return Top-𝑘 documents given scores
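A sketch of the same traversal in Python, operating on the structures produced by the construction sketch above; `probe_set` plays the role of PI, and a linear scan over the skip entries stands in for the pointer advancement.

```python
from collections import defaultdict

def partitioned_search(index, skips, query, probe_set, k=10):
    """Mirror of Algorithm 4: coordinate-at-a-time scoring restricted to the
    blocks of the inverted lists that belong to partitions in `probe_set`."""
    scores = defaultdict(float)
    for t, q_t in query.items():                # non-zero query coordinates
        plist, skip = index.get(t, []), skips.get(t, [])
        for pos, (pid, begin) in enumerate(skip):
            if pid not in probe_set:
                continue                        # skip this whole block
            end = skip[pos + 1][1] if pos + 1 < len(skip) else len(plist)
            for doc_id, value in plist[begin:end]:
                scores[doc_id] += q_t * value
    return sorted(scores, key=scores.get, reverse=True)[:k]
```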
Compression algorithms perform better when 𝑑-gaps are smaller and when there is a run of the same 𝑑-gap in the list. But we can address that concern trivially through document identifier reassignment: after partitioning is done by Algorithm 1, we assign new identifiers to documents such that documents within a partition have consecutive identifiers. The second factor is the additional data stored in S. In the worst case, each inverted list will have documents from every partition. That entails that each S[𝑡] records 𝑃 additional pairs of integers, each consisting of a partition identifier and the offset within the inverted list where that partition begins. As such, in the worst case, the inverted index is inflated by the size of storing 2𝑁𝑃 integers. However, given that 𝑃 is orders of magnitude smaller than the total number of non-zero coordinates in the collection (so that 2𝑁𝑃 ≪ 𝜓|X|), the increase to the total size of the inverted index is mild at worst. Moreover, skip lists can be further compressed using an integer or integer-list codec.
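The reassignment step admits a one-pass sketch: walk the partitions in order and hand out consecutive identifiers, so that documents in the same block produce small 𝑑-gaps. This is a hypothetical helper, not the paper's code.

```python
def reassign_ids(partitions):
    """Map old document ids to new ids that are consecutive within a partition."""
    old_to_new, next_id = {}, 0
    for members in partitions:
        for doc_id in sorted(members):
            old_to_new[doc_id] = next_id
            next_id += 1
    return old_to_new
```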
6.2 Query Processing over Partitioned Inverted Lists

When Algorithm 2 gives us a set of partitions PI to probe, we use a simple coordinate-at-a-time scheme to compute the scores of documents in the union of the probed partitions, ∪_{P𝑖 ∈ PI} P𝑖, and return the top-𝑘 vectors. When processing coordinate 𝑡 and accumulating partial inner product scores, we have two operations to perform. First, we must take the intersection of the skip list and the list of whitelisted partitions: PI ∩ S[𝑡].PartitionId (where the operator PartitionId returns the partition identifier of every element in the skip list). Only then do we traverse the inverted list I[𝑡] by looking at the offsets of partitions in the intersection set. One possible instance of this procedure is described in Algorithm 4.

6.3 Empirical Evaluation

There are four key properties that we wish to evaluate. Naturally, we care about the efficiency of Algorithms 3 and 4 when we use them as R in Algorithm 2. But, seeing as the partitioning performed by Algorithm 1 is not guaranteed to be the optimal partitioning P∗, we understand there is a risk of losing retrieval accuracy by probing only a fraction of partitions, as demonstrated in Section 5.
As such, the second important property is the effectiveness of the methods presented here. We thus report throughput versus accuracy as one trade-off space of interest. We also presented Algorithms 3 and 4 as a new dynamic pruning method for the inverted index. To show that, for different levels of accuracy, we indeed prune the inverted lists, we additionally report the size of the pruned space as we process queries. A third factor is the size of the inverted index and the inflation due to (a) the additional data structure that holds skip pointers and (b) the partition centroids produced by Algorithm 1. We also evaluate this aspect, but we do not apply compression anywhere in our evaluation: we consider compression to be orthogonal to this work and only report the overhead. Finally, we implemented Algorithms 1 through 4 with parallelism enabled within and across queries. We believe, therefore, it is important to measure the effect of the number of CPU cores on throughput. As such, we present throughput measurements by changing the number of cores we make available to the algorithms.
6.3.1 Baseline Retrieval Algorithm. As argued earlier, we are interested in general sparse vectors, such as those produced by Splade, which exhibit distributional properties that differ from traditional sparse vectors based on lexical models of relevance. It has been noted by others [16, 48] that an exhaustive disjunctive query processor over the inverted index—a method Bruch et al. referred to as LinScan—outperforms all dynamic pruning-based optimization methods and represents a strong baseline. We therefore use LinScan as our baseline system. LinScan is a safe algorithm, as it evaluates every qualified document (i.e., every document that contains at least one non-zero coordinate of the query vector). But as Bruch et al. show in [16], there is a simple strategy to turn LinScan into an approximate algorithm: by giving the algorithm a time budget, we can ask it to process as many coordinates as possible until the budget has been exhausted. At that point, LinScan returns the approximate top-𝑘 set according to the accumulated partial inner product scores. We use this variant to obtain approximate top-𝑘 sets for comparison with our own approximate algorithms.
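A rough sketch of the budgeted variant follows; the coordinate ordering is our choice for illustration (the paper does not specify one), and the index layout matches the construction sketch given earlier.

```python
import time
from collections import defaultdict

def linscan_budgeted(index, query, budget_seconds, k=10):
    """Process query coordinates until the time budget is exhausted, then
    return the top-k documents by the accumulated partial scores."""
    scores, start = defaultdict(float), time.monotonic()
    # Illustrative ordering: largest-magnitude query coordinates first.
    for t, q_t in sorted(query.items(), key=lambda kv: -abs(kv[1])):
        if time.monotonic() - start > budget_seconds:
            break                                # scores are now partial
        for doc_id, value in index.get(t, []):
            scores[doc_id] += q_t * value
    return sorted(scores, key=scores.get, reverse=True)[:k]
```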
6.3.2 Throughput versus Accuracy. The first topic of evaluation is the trade-off between throughput and accuracy. We can trade one factor off for the other by adjusting the parameter ℓ in Algorithm 2: A smaller ℓ will result in probing fewer partitions, which in turn leads to faster retrieval but lower quality. Letting ℓ approach the size of the collection, on the other hand, results in the algorithm probing every partition, leading to a slower but higher-quality retrieval. We tune this knob as we perform top-10 retrieval over our datasets. We use Splade and Efficient Splade vectors as input to the algorithms, sketch them using the JL and Weak Sinnamon transforms, but partition the data only using spherical KMeans. The results of our experiments are shown in Figures 6 and 7. In order to digest the trends, we must recall that the throughput of our retrieval method is affected by two factors: the time it takes to perform inner product of a query vector with cluster centroids, and the time it takes to execute algorithm R on the subset of partitions identified from the previous step. In the low-recall regime, we expect the first factor to make up the bulk of the processing time, while in the high-recall regime the cost of executing R starts to dominate the overall processing time.
That phenomenon is evident in the figures for both the Splade and Efficient Splade experiments. That also explains why throughput is much better when sketching is done with Weak Sinnamon rather than the JL transform: Weak Sinnamon creates sparse query sketches, which lead to faster inner product computation with the partition centroids. What is also clear from our experiments is that our approximate method always compares favorably to the approximate baseline. In fact, for the same desired accuracy, our method often reaches a throughput that is orders of magnitude larger than that of the baseline. For instance, on MS Marco encoded with Splade, an instance of our algorithm that operates on Weak Sinnamon sketches processes queries at an extrapolated rate of approximately 2,000 queries per second and delivers 90% accuracy, while the baseline method yields a throughput of roughly 150 queries per second.

(Figure 6 panels: (a) MS Marco, (b) NQ, (c) Quora, (d) HotpotQA, (e) Fever, (f) DBPedia.)

Fig. 6. Throughput (as queries per second) versus top-10 retrieval accuracy on Splade-encoded datasets. We limit the experiments to an instance of Algorithm 1 that uses spherical KMeans. Included here is an approximate variant of an exhaustive disjunctive query processor (LinScan). We use 20 CPU cores and repeat each experiment 10 times for a more reliable throughput measurement. Axes are not consistent across figures.
(Figure 7 panels: (a) MS Marco, (b) NQ, (c) Quora, (d) HotpotQA, (e) Fever, (f) DBPedia.)

Fig. 7. Throughput vs. top-10 retrieval accuracy on Efficient Splade-encoded datasets. Setup is as in Figure 6.

At lower recalls, the gap is substantially wider. As we require higher accuracy, all methods become slower. Ultimately, of course, if we set ℓ too high, our algorithms become slower than the exact baseline. That is because our approximate algorithms have to pay the price of computing inner products with the centroids and must execute the additional step of intersecting PI with the skip lists. We do not show this empirically, however.
(Figure 8 panels: (a) Splade, (b) Efficient Splade.)

Fig. 8. Percentage of qualified documents (i.e., documents that contain at least one non-zero coordinate of the query) pruned versus top-10 accuracy for the MS Marco dataset. In this setup, Algorithm 1 uses Weak Sinnamon along with spherical KMeans for partitioning. Note the irregular spacing of the horizontal axes.

6.3.3 Effect of Dynamic Pruning. As we already explained, when we adjust the parameter ℓ in Algorithm 2, we control the number of documents the sub-algorithm R is allowed to evaluate. While we studied the impact of ℓ on efficiency as measured by throughput, here we wish to understand its effect in terms of the amount of pruning it induces. Whereas throughput measurements depend on our specific implementation of Algorithm 4, measuring the portion of documents pruned is implementation-agnostic and, as such, serves as a more definitive measure of efficiency.
To that end, we count, for each query, the actual number of documents evaluated by Algorithm 4 as we gradually increase ℓ. We plot this quantity in Figure 8 for MS Marco, from a configuration of our algorithms that uses Weak Sinnamon and spherical KMeans. To improve visualization, we show not raw counts but the percentage of qualified documents—defined, once again, as documents that contain at least one non-zero coordinate of the query—that Algorithm 4 evaluates. That is indicative of how much of the inverted lists the algorithm manages to skip. As one observes, in the low-recall region, the algorithm probes only a fraction of the inverted lists. On the Splade dataset, the algorithm reaches a top-10 accuracy of 0.94 by merely evaluating, on average, about 10% of the total number of documents in the inverted lists. On Efficient Splade, as expected, the algorithm is relatively less effective. These results are encouraging: they show the potential that a clustering-based organization of the inverted index has for dynamic pruning in approximate MIPS. Importantly, this method does not require the vectors to follow certain distributions or be non-negative.
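This measurement can be sketched directly against the partitioned index by counting postings in the probed blocks against all postings in the query's lists. Note that the sketch below counts postings rather than distinct documents, a simplification on our part.

```python
def fraction_evaluated(index, skips, query, probe_set):
    """Approximate the fraction of qualified postings Algorithm 4 touches."""
    qualified = evaluated = 0
    for t in query:
        plist, skip = index.get(t, []), skips.get(t, [])
        qualified += len(plist)
        for pos, (pid, begin) in enumerate(skip):
            if pid in probe_set:
                end = skip[pos + 1][1] if pos + 1 < len(skip) else len(plist)
                evaluated += end - begin
    return evaluated / max(qualified, 1)
```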
6.3.4 Index Size Overhead. As we mentioned earlier, our algorithms add overhead to the index structure required for query processing. If our reference point is the LinScan algorithm with a basic (uncompressed) inverted index, our methods introduce two additional structures: (a) the skip list, S, in Algorithm 3; and (b) the array of 4√|X| centroids produced by Algorithm 1. We next measure this overhead.

Table 2. Index sizes in GB. The index in LinScan is made up of an inverted index with document identifiers and floating-point values (uncompressed). The index in our method stores 4√|X| centroids from the application of spherical KMeans to Weak Sinnamon for dataset X, an inverted index with the same size as LinScan, and the skip list structure S.
We report our findings in Table 2 for the Splade and Efficient Splade vector datasets, measured in GB of space after serialization to disk. We reiterate that we do not apply compression to the index: there is an array of compression techniques that can be applied to the different parts of the data structure (such as quantization, approximation, and 𝑑-gap compression), and choosing any of them would arbitrarily conflate the inflation due to the overhead with the compression rate. We observe that the overhead of our method on larger datasets is relatively mild. The increase in size ranges from 6% to 10% (Quora excluded) for the Splade-encoded datasets, and spans a slightly wider and larger range for the Efficient Splade-encoded datasets.
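To make the accounting concrete, the following back-of-the-envelope sketch estimates the overhead in bytes. The centroid count 4√|X| comes from the text; the sketch dimension, skip-entry count, and byte sizes are illustrative placeholders, not measured values.

```python
import math

def overhead_bytes(num_docs: int, sketch_dim: int, num_skip_entries: int,
                   bytes_per_float: int = 4, bytes_per_entry: int = 8) -> int:
    # 4 * sqrt(|X|) centroids, as in the text; each is a sketch_dim vector.
    num_centroids = int(4 * math.sqrt(num_docs))
    centroid_bytes = num_centroids * sketch_dim * bytes_per_float
    # One skip entry per (inverted list, partition) pair (our assumption).
    skip_bytes = num_skip_entries * bytes_per_entry
    return centroid_bytes + skip_bytes

# Illustrative values only: 8.8M MS Marco passages, 1024-dimensional sketches.
print(overhead_bytes(8_800_000, 1024, 50_000_000) / 1e9, "GB")
```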
6.3.5 Effect of Parallelism. We conclude the empirical evaluation of our approximate algorithm by repeating the throughput-accuracy experiments with different numbers of CPUs. In our implementation, we take advantage of access to multiple processors by parallelizing the computation of inner product between queries and centroids (in Algorithm 2) for each query, in addition to distributing the queries themselves to the available CPUs. As a result of this concurrent paradigm, we expect that, by reducing the number of CPUs available to the algorithm, throughput will be more heavily affected in low-recall regions (when ℓ is small). Figure 9 shows the results of these experiments on the Splade- and Efficient Splade-encoded MS Marco dataset. The figures only include a configuration of our algorithms with spherical KMeans and Weak Sinnamon. It is easy to confirm that our hypothesis holds: in low-recall regions, where computation is heavily dominated by the cost of computing inner product with centroids, throughput decreases considerably as we reduce the number of CPUs.
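A minimal sketch of the two levels of parallelism described above: queries are distributed across workers, and for each query the query-centroid inner products reduce to a single matrix-vector product. All names are ours.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def top_clusters(query: np.ndarray, centroids: np.ndarray, num_probe: int):
    scores = centroids @ query  # inner product with every centroid at once
    return np.argpartition(-scores, num_probe)[:num_probe]

def batch_top_clusters(queries, centroids, num_probe, num_cpus):
    # Distribute queries across the available CPUs.
    with ThreadPoolExecutor(max_workers=num_cpus) as pool:
        return list(pool.map(lambda q: top_clusters(q, centroids, num_probe),
                             queries))
```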
Fig. 9. Effect of changing the number of CPUs on throughput for (a) Splade and (b) Efficient Splade. The figures illustrate these measurements for MS Marco and a particular configuration of our algorithm that uses spherical KMeans over Weak Sinnamon sketches. We include LinScan executed on 20 CPUs from Figures 6 and 7 as a point of reference.

7 TOWARDS A UNIFIED FRAMEWORK FOR MIPS

Sections 4 through 6 presented a complete instance of Algorithm 2 for IVF-based MIPS over sparse vectors. But recall that we borrowed the idea of IVF-based search from the dense MIPS literature. So it is only natural to pose the following question: now that we have an arbitrarily-accurate IVF algorithm for sparse vectors, can we extend it to hybrid vectors in R𝑚+𝑁? In this section, we unpack that question, albeit superficially, and investigate possible directions at a high level to explore the feasibility and benefits of such an approach. First, however, let us motivate this question.

7.1 Motivation

We described the changing landscape of retrieval in Section 1. From lexical-semantic search to multi-modal retrieval, for many emerging applications the ability to conduct MIPS over hybrid vectors efficiently and effectively is a requisite. One viable approach to searching over a collection
of hybrid vectors X is to simply decompose the process into separate MIPS questions, one over the dense subspace X𝑑 and the other over the sparse subspace X𝑠, followed by an aggregation of the retrieved sets. Indeed, this approach has become the de facto solution for hybrid vector retrieval [12, 17]. The two-stage retrieval system works as follows. When a hybrid query vector 𝑞 ∈ R𝑚+𝑁 arrives and the retrieval system is expected to return the top 𝑘 documents, commonly 𝑞𝑑 is sent to the dense MIPS system with a request for the top 𝑘′ ≥ 𝑘 vectors, and 𝑞𝑠 to the sparse retrieval component with a similar request. Documents in the union of the two sets are subsequently scored and reranked to produce an approximate set of top-𝑘 vectors, ˜S:

˜S = (𝑘) arg max_{𝑥 ∈ S𝑑 ∪ S𝑠} ⟨𝑞, 𝑥⟩,   (10)

where
S𝑑 = (𝑘′) arg max_{𝑥 ∈ X} ⟨𝑞𝑑, 𝑥𝑑⟩   and   S𝑠 = (𝑘′) arg max_{𝑥 ∈ X} ⟨𝑞𝑠, 𝑥𝑠⟩.   (11)
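The following numpy sketch spells out this two-stage logic. All names are ours, and plain dense arrays stand in for whatever index structures actually serve each sub-system.

```python
import numpy as np

def two_stage_topk(q_dense, q_sparse, X_dense, X_sparse, k, k_prime):
    s_d = X_dense @ q_dense    # dense scores, as in Equation (11)
    s_s = X_sparse @ q_sparse  # sparse scores, as in Equation (11)
    # Exact top-k' identifiers from each sub-system.
    S_d = np.argpartition(-s_d, k_prime)[:k_prime]
    S_s = np.argpartition(-s_s, k_prime)[:k_prime]
    union = np.union1d(S_d, S_s)
    # Rerank the union by the full hybrid inner product <q, x>, Equation (10).
    full = s_d[union] + s_s[union]
    return union[np.argsort(-full)[:k]]
```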
Fig. 10. Top-10 accuracy of the two-stage retrieval system for hybrid vectors. We retrieve 𝑘′ candidates from each sub-system and rerank them to find the top-10 set. We prepare the hybrid vectors by first normalizing the dense and sparse parts separately, then constructing query vectors as follows: 𝑞 = 𝑤dense𝑞𝑑 + (1 − 𝑤dense)𝑞𝑠, where 𝑞𝑑 and 𝑞𝑠 are sampled from the data distribution. In effect, 𝑤dense shifts the ℓ2 mass from the sparse to the dense subspace, giving more importance to one subspace over the other during retrieval.

Let us set aside the effectiveness of the setup above for a moment and consider its complexity from a systems standpoint. It is clear that, both for researchers and practitioners, studying and creating two disconnected, incompatible systems adds unwanted costs. For example, systems developers must take care to keep all documents in sync between the two indexes at all times. Reasoning about the (mis)behavior of the retrieval system, as another example, requires investigating one layer of indirection and understanding the processes leading to two separate retrieved sets. These collectively pose a challenge to systems researchers and add difficulty to operations in production. Furthermore, it is easy to see that the least scalable of the two systems dictates or shapes the overall latency and throughput capacity.
Even if we accepted the cost of studying two separate systems or deemed it negligible, and further decided scalability is not a concern, it is not difficult to show that such a heterogeneous design may prove wasteful or outright ineffective in the general case. More concretely, depending on how the ℓ2 mass of the query and document vectors is split between the dense subspace and the sparse subspace, the two sub-systems involved may have to resort to a large 𝑘′ in order to ensure an accurate final retrieved set at rank 𝑘. While the phenomenon above is provable, we demonstrate its effect by a simple (though contrived) experiment. We generate a collection of 100,000 documents and 1,000 queries. Each vector is a hybrid of a dense and a sparse vector. The dense vectors are in R⁶⁴, with each coordinate drawn from the exponential distribution (with scale 0.5). The sparse vectors are in R¹⁰⁰⁰ with an average of 𝜓 = 16 non-zero coordinates, where non-zero values are drawn from the exponential distribution (scale 0.5). We use different seeds for the pseudo-random generator when creating document and query vectors.
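A sketch of this generator follows. The text specifies only an average of 𝜓 non-zeros per sparse vector; drawing the per-vector count from a Poisson distribution with that mean is our assumption.

```python
import numpy as np

def make_hybrid(count, rng, m=64, N=1000, psi=16, scale=0.5):
    # Dense part: exponential entries with the stated scale.
    dense = rng.exponential(scale, size=(count, m))
    # Sparse part: on average psi non-zero coordinates per vector.
    sparse = np.zeros((count, N))
    for row in sparse:
        nnz = min(rng.poisson(psi), N)  # assumption: Poisson with mean psi
        idx = rng.choice(N, size=nnz, replace=False)
        row[idx] = rng.exponential(scale, size=nnz)
    return dense, sparse

# Different seeds for documents and queries, as in the text.
docs_d, docs_s = make_hybrid(100_000, np.random.default_rng(0))
queries_d, queries_s = make_hybrid(1_000, np.random.default_rng(1))
```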
In order to study how the ratio of ℓ2 mass between the dense and sparse subspaces affects retrieval quality, we first normalize the generated dense and sparse vectors separately. During retrieval, we amplify the dense part of the query vector by a weight between 0 and 1, and multiply the sparse part by one minus that weight. In the end, we are performing retrieval for a query vector 𝑞 that can be written as 𝑤dense𝑞𝑑 + (1 − 𝑤dense)𝑞𝑠. By letting 𝑤dense sweep the unit interval, we simulate a shift of the ℓ2 mass of the hybrid vector from the sparse to the dense subspace. Over the generated collection, we conduct exact retrieval using exhaustive search and obtain the top 𝑘 = 10 vectors for each query by maximizing the inner product.
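A small sketch of this query construction, assuming dense arrays for both halves; names are ours.

```python
import numpy as np

def weighted_query(q_dense, q_sparse, w_dense):
    q_d = q_dense / np.linalg.norm(q_dense)    # unit l2 norm, dense part
    q_s = q_sparse / np.linalg.norm(q_sparse)  # unit l2 norm, sparse part
    # Return the two halves of q = w_dense * q_d  (+)  (1 - w_dense) * q_s.
    return w_dense * q_d, (1.0 - w_dense) * q_s
```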
Algorithm 5: Indexing of hybrid vectors
Input: Collection X of hybrid vectors in R𝑚+𝑁; number of clusters, 𝑃; random projector 𝜙 : R𝑁 → R𝑛, where 𝑛 ≪ 𝑁; clustering algorithm Cluster that returns partitions of input data and their representatives.
Result: Cluster assignments P𝑖 = {𝑗 | 𝑥(𝑗) ∈ Partition 𝑖} and cluster representatives C𝑖's.
1: ˜X ← {𝑥𝑑 ⊕ 𝜙(𝑥𝑠) | 𝑥𝑑 ⊕ 𝑥𝑠 ∈ X}
2: Partitions, Representatives ← Cluster(˜X; 𝑃)
3: P𝑖 ← {𝑗 | ˜𝑥(𝑗) ∈ Partitions[𝑖]}, ∀ 1 ≤ 𝑖 ≤ 𝑃
4: C𝑖 ← Representatives[𝑖], ∀ 1 ≤ 𝑖 ≤ 𝑃
5: return P and C
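A minimal Python rendering of Algorithm 5 may look as follows. A Gaussian JL matrix stands in for the projector 𝜙 and scikit-learn's standard KMeans for Cluster; both are our choices for illustration, not fixed by the algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def index_hybrid(X_dense, X_sparse, P, rng, n=128):
    N = X_sparse.shape[1]
    phi = rng.normal(size=(N, n)) / np.sqrt(n)        # JL projector (assumed)
    X_tilde = np.hstack([X_dense, X_sparse @ phi])    # line 1: x_d (+) phi(x_s)
    km = KMeans(n_clusters=P, n_init=4).fit(X_tilde)  # line 2: Cluster
    # Lines 3-5: partition membership lists and cluster representatives.
    partitions = [np.flatnonzero(km.labels_ == i) for i in range(P)]
    return partitions, km.cluster_centers_, phi
```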
Algorithm 6: Retrieval of hybrid vectors
Input: Hybrid query vector, 𝑞 ∈ R𝑚+𝑁; clusters and representatives, P and C, obtained from Algorithm 5; random projector 𝜙 : R𝑁 → R𝑛; number of data points to examine, ℓ ≤ |X|, where |X| denotes the size of the collection; hybrid MIPS sub-algorithm R.
Result: Approximate set of top-𝑘 vectors that maximize inner product with 𝑞.
1: ˜𝑞 ← 𝑞𝑑 ⊕ 𝜙(𝑞𝑠)
2: SortedClusters ← SortDescending(P by ⟨˜𝑞, C𝑖⟩)
3: TotalSize ← 0
4: I ← ∅
5: for P𝜋𝑖 ∈ SortedClusters do
6:   I ← I ∪ {𝜋𝑖}
7:   TotalSize ← TotalSize + |P𝜋𝑖|
8:   if TotalSize ≥ ℓ then break
9: end for
10: return Top-𝑘 vectors from partitions PI ≜ {P𝑖 | 𝑖 ∈ I} w.r.t. ⟨𝑞, ·⟩ using R
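A matching sketch of Algorithm 6, with exhaustive search as the sub-algorithm R (mirroring the choice made in this section). The accumulation loop reflects our reading of lines 5 through 9: visit clusters in score order until at least ℓ points are covered.

```python
import numpy as np

def retrieve_hybrid(q_dense, q_sparse, partitions, centroids, phi,
                    ell, k, X_dense, X_sparse):
    q_tilde = np.concatenate([q_dense, phi.T @ q_sparse])  # line 1
    order = np.argsort(-(centroids @ q_tilde))             # line 2
    chosen, total = [], 0
    for i in order:                                        # lines 5-9
        chosen.append(partitions[i])
        total += len(partitions[i])
        if total >= ell:
            break
    ids = np.concatenate(chosen)
    # Line 10: exhaustive search over the chosen partitions plays the role of R.
    scores = X_dense[ids] @ q_dense + X_sparse[ids] @ q_sparse
    return ids[np.argsort(-scores)[:k]]
```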
We then use the two-stage design by asking each sub-system to return the (exact) top-𝑘′ vectors for 𝑘′ ∈ [100], and reranking the union set to obtain the final top 𝑘 = 10 documents. We then measure the top-𝑘 accuracy of the two-stage architecture. Figure 10 plots accuracy versus 𝑘′ for different values of 𝑤dense. It is easy to see that, as one subspace becomes more important than the other, the retrieval quality changes too. Importantly, a larger 𝑘′ is often required to attain a high accuracy. The factors identified in this section—systems complexity, scalability bottleneck, and the sub-optimality of retrieval quality—nudge us in the direction of a unified framework for MIPS.
7.2 IVF MIPS for Hybrid Vectors

We present a simple extension of the IVF indexing and retrieval duo of Algorithms 1 and 2 to generalize the logic to hybrid vectors. This is shown in Algorithms 5 and 6, where the only two differences with the original algorithms are that (a) sketching is applied only to the sparse portion of vectors to form new vectors in R𝑚+𝑛 instead of R𝑚+𝑁, and (b) the sub-algorithm R is assumed to carry out top-𝑘 retrieval over hybrid vectors from a given set of partitions. In this section, we only verify the viability of the extended algorithms and leave an in-depth investigation of the proposal to future work. As such, we use exhaustive search as the sub-algorithm R and acknowledge that any observations made with such an algorithm speak only to the effectiveness of the method, not its efficiency.
7.3 Empirical Evaluation

Let us repeat the experiment from Section 7.1 on synthetic vectors and compare the two-stage retrieval process with the unified framework in terms of retrieval accuracy. To that end, we design the following protocol. First, we perform exact MIPS using exhaustive search over the hybrid collection of vectors. The set of top-𝑘 documents obtained in this way makes up the ground-truth for each query. Next, we consider the two-stage system. We retrieve through exhaustive search the exact set of top-𝑘′ (for a large 𝑘′) documents according to their sparse inner product, and another (possibly overlapping) set by their dense inner product. From the two ranked lists, we accumulate enough documents from the top such that the size of the resulting set is roughly equal to 𝑘. In this way, we can measure the top-𝑘 accuracy of the two-stage system against the ground-truth. Finally, we turn to the unified framework. We use the JL transform to reduce the dimensionality of sparse vectors, and spherical KMeans to partition the vectors. We then proceed as usual and measure top-𝑘 accuracy for different values of ℓ.
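The accuracy metric used throughout this protocol admits a short sketch; exact and approximate result sets are arrays of document identifiers, and all names are ours.

```python
import numpy as np

def topk_accuracy(exact_sets, approx_sets, k):
    # Fraction of the exact top-k set recovered, averaged over queries.
    hits = [len(np.intersect1d(e[:k], a[:k])) / k
            for e, a in zip(exact_sets, approx_sets)]
    return float(np.mean(hits))
```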
From these experiments, we wish to understand whether and when the accuracy of the unified framework exceeds that of the two-stage setup. If the unified system is able to surpass the accuracy of the two-stage system by examining a relatively small portion of the collection—a quantity controlled through ℓ—then that is indicative of the viability of the proposal. Indeed, as Figure 11 shows, the unified system almost always reaches a top-10 accuracy that is higher than the two-stage system's by evaluating less than 2% of the collection.

8 DISCUSSION AND CONCLUSION

We began this research with a simple question: can we apply dense MIPS algorithms to sparse vectors? That led us to investigate different dimensionality reduction techniques for sparse vectors as a way to contain the curse of dimensionality. We showed, for example, that the JL transform and Sinnamon behave differently on sparse vectors and can preserve inner product to different degrees. We also thoroughly evaluated the effect of clustering on sparse MIPS in the context of an IVF-based retrieval system. Coupling dimensionality reduction with clustering realized an effective IVF system for sparse vectors, summarized in Algorithms 1 and 2.
The protocol is easy to describe and is as follows. In a first step, we sketch sparse vectors into a lower-dimensional (dense or sparse) subspace. We then apply clustering to the sketches and partition the data into a predetermined number of clusters, each identified by a representative (e.g., a centroid). When the system is presented with a query, we sketch the query (asymmetrically) and identify the top partitions by taking the inner product between the query and the cluster representatives. We then execute a secondary sub-algorithm to perform MIPS on the restricted subset of document vectors. In our presentation of the material above, we observed a strong, natural connection between clustering for IVF and dynamic pruning methods for inverted indexes. We developed that insight into an inverted index-based algorithm that could serve as the sub-algorithm in the above search procedure. Importantly, the algorithm organizes documents within an inverted list by partition identifier—rather than the conventional arrangement by document identifier or impact score. Such an organization, coupled with skip pointers, enables the algorithm to only search over the subset of documents that belong to the top partitions determined by the IVF method. Crucially, the algorithm is agnostic to the vector distribution and admits real-valued vectors.
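Illustrative glue only, instantiating this protocol for hybrid vectors by reusing the helper sketches from Section 7 (make_hybrid, index_hybrid, retrieve_hybrid, topk_accuracy); the cluster count and ℓ are arbitrary here.

```python
import numpy as np

rng = np.random.default_rng(2)
parts, cents, phi = index_hybrid(docs_d, docs_s, P=256, rng=rng)

# Ground truth by exhaustive search, then the approximate IVF pipeline.
exact = [np.argsort(-(docs_d @ qd + docs_s @ qs))[:10]
         for qd, qs in zip(queries_d, queries_s)]
approx = [retrieve_hybrid(qd, qs, parts, cents, phi, ell=2_000, k=10,
                          X_dense=docs_d, X_sparse=docs_s)
          for qd, qs in zip(queries_d, queries_s)]
print("top-10 accuracy:", topk_accuracy(exact, approx, k=10))
```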
Fig. 11. Top-10 accuracy over hybrid vectors as a function of the percentage of documents probed, for (a) 𝑤dense = 0.2, (b) 𝑤dense = 0.5, and (c) 𝑤dense = 0.8. 𝑤dense controls how much of the ℓ2 mass of a hybrid vector is concentrated in its dense subspace. We also plot the performance of the two-stage system where each sub-system returns the set of top-𝑘′ documents according to sparse or dense inner product scores, such that the size of the union of the two sets is roughly 𝑘.

Finally, we discussed how our proposal leads to a unified retrieval framework for hybrid vectors. By sketching the sparse sub-vectors and constructing an IVF index for the transformed hybrid vectors, we showed that it is possible to achieve better recall than a two-stage system where dense and sparse sub-vectors are handled separately. An added advantage of the unified approach is that its accuracy remains robust under different vector distributions, as the mass shifts from the dense to the sparse subspace.
We limited our discussion of hybrid MIPS to synthetic vectors, as we were only interested in the viability of this byproduct of our primary research question. We acknowledge that we have only scratched the surface of retrieval over hybrid vectors. There are a multitude of open questions within the unified regime that warrant further investigation, including many minor but practical aspects of the framework that we conveniently ignored in our high-level description. We leave those as future work.

We believe our investigation of MIPS for sparse (and hybrid) vectors provides many opportunities for information retrieval researchers. One line of research most immediately affected by our proposal is sparse representation learning. Models such as Splade are not only competitive on in- and out-of-domain tasks, they also produce inherently interpretable representations of text—a desirable behavior in many production systems. However, sparse embeddings have, by and large, been tailored to existing retrieval regimes. For example, Efficient Splade learns sparser queries for better latency. uniCoil [39] collapses the term representations of Coil [26] to a scalar for compatibility with inverted indexes. We claim that our proposed regime is a step toward removing such constraints, enabling researchers to explore sparse representations without much restraint, leading to potentially different behavior. As we observe in Figures 4 and 5, for example, Splade
vectors are more amenable to clustering than Efficient Splade, and may even prove more efficient within the new framework. That is good news, as there is evidence suggesting that Splade is more effective than its other variant on out-of-domain data [38]. Another related area of research that can benefit from our proposed regime is multi-modal and multimedia retrieval. Because our framework is agnostic to the distribution of the hybrid vectors, it is entirely plausible to formulate the multi-modal problem as MIPS over hybrid vectors, especially when one of the modes involves textual data or data that is partially sparse, or where one may need to engineer (sparse) features to augment dense embeddings.

REFERENCES

[1] Nir Ailon and Bernard Chazelle. 2006. Approximate Nearest Neighbors and the Fast Johnson-Lindenstrauss Transform. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (Seattle, WA, USA). 557–563.
[2] Nir Ailon and Bernard Chazelle. 2009. The Fast Johnson–Lindenstrauss Transform and Approximate Nearest Neighbors. SIAM J. Comput. 39, 1 (2009), 302–322.
[3] Nir Ailon and Edo Liberty. 2011. An Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (San Francisco, California). 185–191.
[4] Nir Ailon and Edo Liberty. 2013. An Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform. ACM Trans. Algorithms 9, 3, Article 21 (June 2013), 12 pages.
[5] David Arthur and Sergei Vassilvitskii. 2007. K-Means++: The Advantages of Careful Seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (New Orleans, Louisiana). 1027–1035.
[6] Nima Asadi. 2013. Multi-Stage Search Architectures for Streaming Documents. University of Maryland.
[7] Nima Asadi and Jimmy Lin. 2013. Effectiveness/Efficiency Tradeoffs for Candidate Generation in Multi-Stage Retrieval Architectures. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval (Dublin, Ireland). 997–1000.
[8] Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. 2015. Clustering is Efficient for Approximate Maximum Inner Product Search. arXiv:1507.05910 [cs.LG]
[9] Yang Bai, Xiaoguang Li, Gang Wang, Chaoliang Zhang, Lifeng Shang, Jun Xu, Zhaowei Wang, Fangshan Wang, and Qun Liu. 2020. SparTerm: Learning Term-based Sparse Representation for Fast Text Retrieval.
[10] Richard Baraniuk, M. Davenport, Ronald DeVore, and M. Wakin. 2006. The Johnson-Lindenstrauss Lemma Meets Compressed Sensing. IEEE Transactions on Information Theory 52 (January 2006), 1289–1306.
[11] Andrei Z. Broder, David Carmel, Michael Herscovici, Aya Soffer, and Jason Zien. 2003. Efficient Query Evaluation Using a Two-Level Retrieval Process. In Proceedings of the Twelfth International Conference on Information and Knowledge Management (New Orleans, LA, USA). 426–434.
[12] Sebastian Bruch, Siyu Gai, and Amir Ingber. 2023. An Analysis of Fusion Functions for Hybrid Retrieval. ACM Transactions on Information Systems 42, 1, Article 20 (August 2023), 35 pages.
[13] Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2022. ReNeuIR: Reaching Efficiency in Neural Information Retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 3462–3465.
[14] Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2023. Efficient and Effective Tree-based and Neural Learning to Rank. Foundations and Trends in Information Retrieval 17, 1 (2023), 1–123.
[15] Sebastian Bruch, Joel Mackenzie, Maria Maistro, and Franco Maria Nardini. 2023. ReNeuIR at SIGIR 2023: The Second Workshop on Reaching Efficiency in Neural Information Retrieval. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (Taipei, Taiwan). 3456–3459.
[16] Sebastian Bruch, Franco Maria Nardini, Amir Ingber, and Edo Liberty. 2023. An Approximate Algorithm for Maximum Inner Product Search over Streaming Sparse Vectors. ACM Transactions on Information Systems (July 2023). Just Accepted.
[17] Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. 2022. Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models. In Advances in Information Retrieval: 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10–14, 2022, Proceedings, Part I (Stavanger, Norway). 95–110.
[18] Matt Crane, J. Shane Culpepper, Jimmy Lin, Joel Mackenzie, and Andrew Trotman. 2017. A Comparison of Document-at-a-Time and Score-at-a-Time Query Evaluation. In Proceedings of the 10th ACM International Conference on Web Search and Data Mining (Cambridge, United Kingdom). 201–210.
[19] Zhuyun Dai and Jamie Callan. 2020. Context-Aware Term Weighting For First Stage Passage Retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, China). 1533–1536.
[20] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171–4186.
[21] Inderjit S. Dhillon and Dharmendra S. Modha. 2001. Concept Decompositions for Large Sparse Text Data Using Clustering. Machine Learning 42, 1 (January 2001), 143–175.
[22] Constantinos Dimopoulos, Sergey Nepomnyachiy, and Torsten Suel. 2013. Optimizing Top-k Document Retrieval Strategies for Block-Max Indexes. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining (Rome, Italy). 113–122.
[23] Shuai Ding and Torsten Suel. 2011. Faster Top-k Document Retrieval Using Block-Max Indexes. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (Beijing, China). 993–1002.
[24] Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2353–2359.
[25] Thibault Formal, Benjamin Piwowarski, and Stéphane Clinchant. 2021. SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, Canada). 2288–2292.
[26] Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6–11, 2021. 3030–3042.
[27] Bob Goodwin, Michael Hopcroft, Dan Luu, Alex Clemmer, Mihaela Curmei, Sameh Elnikety, and Yuxiong He. 2017. BitFunnel: Revisiting Signatures for Search. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (Shinjuku, Tokyo, Japan). 605–614.
[28] Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating Large-Scale Inference with Anisotropic Vector Quantization. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research). 3887–3896.
[29] Qiang Huang, Jianlin Feng, Yikai Zhang, Qiong Fang, and Wilfred Ng. 2015. Query-Aware Locality-Sensitive Hashing for Approximate Nearest Neighbor Search. Proc. VLDB Endow. 9, 1 (September 2015), 1–12.
[30] Piotr Indyk and Rajeev Motwani. 1998. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing (Dallas, Texas, USA). 604–613.
[31] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. 2011. Product Quantization for Nearest Neighbor Search. IEEE Trans. Pattern Anal. Mach. Intell. 33, 1 (2011), 117–128.
[32] Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-Scale Similarity Search with GPUs. IEEE Transactions on Big Data 7 (2021), 535–547.
[33] William B. Johnson and Joram Lindenstrauss. 1984. Extensions of Lipschitz Mappings into Hilbert Space. Contemp. Math. 26 (1984), 189–206.
[34] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
[35] Hyunjoong Kim, Han Kyul Kim, and Sungzoon Cho. 2020. Improving Spherical k-Means for Document Clustering: Fast Initialization, Sparse Centroid Projection, and Efficient Cluster Labeling. Expert Systems with Applications 150 (2020), 113288.
[36] Aditya Krishnan and Edo Liberty. 2021. Projective Clustering Product Quantization. arXiv:2112.02179 [cs.DS]
[37] Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020. Leveraging Semantic and Lexical Matching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach. arXiv:2010.01195 [cs.IR]
[38] Carlos Lassance and Stéphane Clinchant. 2022. An Efficiency Study for SPLADE Models. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2220–2226.
[39] Jimmy Lin and Xueguang Ma. 2021. A Few Brief Notes on DeepImpact, COIL, and a Conceptual Framework for Information Retrieval Techniques. arXiv:2106.14807 [cs.IR]
[40] Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021. Pretrained Transformers for Text Ranking: BERT and Beyond. arXiv:2010.06467 [cs.IR]
[41] Jimmy Lin and Andrew Trotman. 2015. Anytime Ranking for Impact-Ordered Indexes. In Proceedings of the 2015 International Conference on The Theory of Information Retrieval (Northampton, Massachusetts, USA). 301–304.
[42] Jie Liu, Xiao Yan, Xinyan Dai, Zhirong Li, James Cheng, and Ming-Chang Yang. 2019. Understanding and Improving Proximity Graph based Maximum Inner Product Search. arXiv:1909.13459 [cs.IR]
[43] Changyi Ma, Fangchen Yu, Yueyao Yu, and Wenye Li. 2021. Learning Sparse Binary Code for Maximum Inner Product Search. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (Virtual Event, Queensland, Australia). 3308–3312.
[44] Ji Ma, Ivan Korotkov, Keith Hall, and Ryan T. McDonald. 2020. Hybrid First-stage Retrieval Models for Biomedical Literature. In CLEF.
[45] Xueguang Ma, Kai Sun, Ronak Pradeep, and Jimmy J. Lin. 2021. A Replication Study of Dense Passage Retriever. arXiv:2104.05740 [cs.IR]
[46] Joel Mackenzie, Antonio Mallia, Alistair Moffat, and Matthias Petri. 2022. Accelerating Learned Sparse Indexes Via Term Impact Decomposition. In Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics, 2830–2842.
[47] Joel Mackenzie, Matthias Petri, and Alistair Moffat. 2021. Anytime Ranking on Document-Ordered Indexes. ACM Transactions on Information Systems 40, 1, Article 13 (September 2021), 32 pages.
[48] Joel Mackenzie, Andrew Trotman, and Jimmy Lin. 2021. Wacky Weights in Learned Sparse Representations and the Revenge of Score-at-a-Time Query Evaluation. arXiv:2110.11540 [cs.IR]
[49] Joel Mackenzie, Andrew Trotman, and Jimmy Lin. 2022. Efficient Document-at-a-Time and Score-at-a-Time Query Evaluation for Learned Sparse Representations. ACM Transactions on Information Systems (December 2022).
[50] Yu. A. Malkov and D. A. Yashunin. 2016. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs. arXiv:1603.09320 [cs.DS]
[51] Antonio Mallia, Omar Khattab, Torsten Suel, and Nicola Tonellotto. 2021. Learning Passage Impacts for Inverted Indexes. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, Canada). 1723–1727.
[52] Antonio Mallia, Joel Mackenzie, Torsten Suel, and Nicola Tonellotto. 2022. Faster Learned Sparse Retrieval with Guided Traversal. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 1901–1905.
[53] Antonio Mallia, Giuseppe Ottaviano, Elia Porciani, Nicola Tonellotto, and Rossano Venturini. 2017. Faster BlockMax WAND with Variable-Sized Blocks. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (Shinjuku, Tokyo, Japan). 625–634.
[54] Antonio Mallia and Elia Porciani. 2019. Faster BlockMax WAND with Longer Skipping. In Advances in Information Retrieval. 771–778.
[55] Stanislav Morozov and Artem Babenko. 2018. Non-metric Similarity Graphs for Maximum Inner Product Search. In Advances in Neural Information Processing Systems.
[56] Behnam Neyshabur and Nathan Srebro. 2015. On Symmetric and Asymmetric LSHs for Inner Product Search. In Proceedings of the 32nd International Conference on Machine Learning - Volume 37 (Lille, France). 1926–1934.
[57] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. (November 2016).
[58] Yuxin Peng, Xin Huang, and Yunzhen Zhao. 2018. An Overview of Cross-Media Retrieval: Concepts, Methodologies, Benchmarks, and Challenges. IEEE Transactions on Circuits and Systems for Video Technology 28, 9 (September 2018), 2372–2385.
[59] Matthias Petri, Alistair Moffat, Joel Mackenzie, J. Shane Culpepper, and Daniel Beck. 2019. Accelerated Query Processing Via Similarity Score Prediction. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (Paris, France). 485–494.
[60] Giulio Ermanno Pibiri and Rossano Venturini. 2020. Techniques for Inverted Index Compression. ACM Comput. Surv. 53, 6, Article 125 (December 2020), 36 pages.
[61] Rameshwar Pratap, Debajyoti Bera, and Karthik Revanuru. 2019. Efficient Sketching Algorithm for Sparse Binary Data. In 2019 IEEE International Conference on Data Mining (ICDM). 508–517.
[62] Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at TREC-3. In TREC (NIST Special Publication, Vol. 500-225), Donna K. Harman (Ed.). National Institute of Standards and Technology (NIST), 109–126.
[63] Anshumali Shrivastava and Ping Li. 2014. Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS). In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2 (Montreal, Canada). MIT Press, Cambridge, MA, USA, 2321–2329.
[64] Y. Song, Y. Gu, R. Zhang, and G. Yu. 2021. ProMIPS: Efficient High-Dimensional c-Approximate Maximum Inner Product Search with a Lightweight Index. In 2021 IEEE 37th International Conference on Data Engineering (ICDE). Los Alamitos, CA, USA, 1619–1630.
[65] Shulong Tan, Zhaozhuo Xu, Weijie Zhao, Hongliang Fei, Zhixin Zhou, and Ping Li. 2021. Norm Adjusted Proximity Graph for Fast Inner Product Retrieval. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (Virtual Event, Singapore). 1552–1560.
[66] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
[67] Mo Tiwari, Ryan Kang, Je-Yong Lee, Donghyun Lee, Chris Piech, Sebastian Thrun, Ilan Shomorony, and Martin Jinye Zhang. 2023. Faster Maximum Inner Product Search in High Dimensions. arXiv:2212.07551 [cs.LG]
[68] Nicola Tonellotto, Craig Macdonald, and Iadh Ounis. 2018. Efficient Query Processing for Scalable Web Search. Foundations and Trends in Information Retrieval 12, 4–5 (December 2018), 319–500.
[69] Howard Turtle and James Flood. 1995. Query Evaluation: Strategies and Optimizations. Information Processing and Management 31, 6 (November 1995), 831–850.
[70] Bhisham Dev Verma, Rameshwar Pratap, and Debajyoti Bera. 2022. Efficient Binary Embedding of Categorical Data using BinSketch. Data Mining and Knowledge Discovery 36 (2022), 537–565.
[71] Mengzhao Wang, Xiaoliang Xu, Qiang Yue, and Yuxiang Wang. 2021. A Comprehensive Survey and Experimental Comparison of Graph-Based Approximate Nearest Neighbor Search. Proc. VLDB Endow. 14, 11 (July 2021), 1964–1978.
[72] Shuai Wang, Shengyao Zhuang, and Guido Zuccon. 2021. BERT-Based Dense Retrievers Require Interpolation with BM25 for Effective Passage Retrieval. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval (Virtual Event, Canada). 317–324.
[73] David P. Woodruff. 2014. Sketching as a Tool for Numerical Linear Algebra. Foundations and Trends in Theoretical Computer Science 10, 1–2 (October 2014), 1–157.
[74] Xiang Wu, Ruiqi Guo, Sanjiv Kumar, and David Simcha. 2019. Local Orthogonal Decomposition for Maximum Inner Product Search. arXiv:1903.10391 [cs.LG]
[75] Xiang Wu, Ruiqi Guo, David Simcha, Dave Dopson, and Sanjiv Kumar. 2019. Efficient Inner Product Approximation in Hybrid Spaces. arXiv:1903.08690 [cs.LG]
[76] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation.
[77] Xiao Yan, Jinfeng Li, Xinyan Dai, Hongzhi Chen, and James Cheng. 2018. Norm-Ranging LSH for Maximum Inner Product Search. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (Montréal, Canada). 2956–2965.
[78] Jheng-Hong Yang, Xueguang Ma, and Jimmy Lin. 2021. Sparsifying Sparse Representations for Passage Retrieval by Top-𝑘 Masking. arXiv:2112.09628 [cs.IR]
[79] Hamed Zamani, Mostafa Dehghani, W. Bruce Croft, Erik Learned-Miller, and Jaap Kamps. 2018. From Neural Re-Ranking to Neural Ranking: Learning a Sparse Representation for Inverted Indexing. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (Torino, Italy). 497–506.
[80] Wengang Zhou, Houqiang Li, and Qi Tian. 2017. Recent Advance in Content-based Image Retrieval: A Literature Survey. arXiv:1706.06064 [cs.MM]
[81] Zhixin Zhou, Shulong Tan, Zhaozhuo Xu, and Ping Li. 2019. Möbius Transformation for Fast Inner Product Search on Graph.
[82] Shengyao Zhuang and Guido Zuccon. 2022. Fast Passage Re-ranking with Contextualized Exact Term Matching and Efficient Passage Expansion. In Workshop on Reaching Efficiency in Neural Information Retrieval, the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.