Bridging Dense and Sparse Maximum Inner Product Search
candidates from their partition of the index. This section proposes a novel alternative for R, based on the insight that clustering documents for IVF-based search and dynamic pruning algorithms in the inverted index-based top-k retrieval literature are intimately connected.

6.1 Partitioning Inverted Lists

Consider an optimal partitioning P* of a collection X of sparse vectors into K clusters, with a set of representative points C*. In the context of MIPS, optimality implies that for any given sparse query q, the solution to C_i = arg max_{c ∈ C*} ⟨q, c⟩ identifies the partition P_i in which we can find the maximizer of arg max_{x ∈ X} ⟨q, x⟩.
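To make the routing step concrete, the following is a minimal sketch (ours, not the authors' implementation) of partition selection by inner product with the representatives; the names `centroids`, `query`, and `top_p` are illustrative.

```python
import numpy as np

def select_partitions(query: np.ndarray, centroids: np.ndarray, top_p: int) -> np.ndarray:
    """Rank partitions by <query, centroid> and return the indices of the top_p.

    centroids: (K, d) matrix whose rows are the representative points C*.
    """
    scores = centroids @ query          # inner product with every representative
    return np.argsort(-scores)[:top_p]  # partitions most likely to hold the maximizer
```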
That implies that, when performing MIPS for a given query, we dynamically prune the set of documents in X \ P_i; the procedure is dynamic because P_i depends on the query vector. Consider now an inverted index that represents X. Typically, its inverted lists are sorted either by document identifiers or by the "impact" of each document on the final inner product score [68]. The former is consequential for compression [60] and document-at-a-time dynamic pruning algorithms [68], while the latter provides an opportunity for early termination of score computation. We reiterate that all of these techniques work only on non-negative vectors, or that their extension to negative vectors is non-trivial.
But, as we explain, P* induces another organization of inverted lists that enables fast, approximate retrieval in the context of Algorithm 2 for general sparse vectors. Our construction, detailed in Algorithm 3, is straightforward. At a high level, when forming an inverted list for a coordinate t, we simply iterate through partitions and add vectors from each partition whose coordinate t is non-zero to the inverted list. As we do so, for each inverted list, we record the offsets within the list of each partition in a separate skip list. Together, the two structures enable us to traverse the inverted lists while only evaluating documents in a given set of partitions. An alternative way of viewing the joint inverted and skip lists is to think of each inverted list as a set of variable-length segments or blocks, where documents within each block are grouped according to a clustering algorithm.
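To illustrate the construction, here is a small Python sketch (ours, under the assumption that documents are dictionaries mapping coordinates to values and that `partitions` lists document identifiers per partition); it mirrors the spirit of Algorithm 3 rather than reproducing it verbatim.

```python
from collections import defaultdict

def build_partitioned_index(docs, partitions):
    """docs: {doc_id: {coordinate: value}}; partitions: list of lists of doc ids.

    Returns an inverted index whose postings are grouped by partition, plus a
    skip list mapping each coordinate to (partition_id, offset) pairs.
    """
    index = defaultdict(list)   # coordinate -> [(doc_id, value), ...]
    skips = defaultdict(list)   # coordinate -> [(partition_id, offset), ...]
    for pid, members in enumerate(partitions):
        for doc_id in members:
            for t, value in docs[doc_id].items():
                # record where this partition's block begins in list t
                if not skips[t] or skips[t][-1][0] != pid:
                    skips[t].append((pid, len(index[t])))
                index[t].append((doc_id, value))
    return index, skips
```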
Before we demonstrate the retrieval logic, we must remark on the space complexity of the resulting structure. There are two factors to comment on. First, sorting the inverted lists by partition identifier rather than document identifier may lead to suboptimality for compression algorithms. That is because the new arrangement of documents may distort the d-gaps (i.e., the differences between consecutive document identifiers in an inverted list); compression algorithms perform better when d-gaps are smaller and when there is a run of the same d-gap in the list. But we can address that concern trivially through document identifier reassignment: after partitioning is done by Algorithm 1, we assign new identifiers to documents such that documents within a partition have consecutive identifiers. The second factor is the additional data stored in S. In the worst case, each inverted list will have documents from every partition. That entails that each S[t] records K
additional pairs of integers, consisting of the partition identifier and the offset within the inverted list where that partition's block begins. As such, in the worst case, the inverted index is inflated by the cost of storing 2K integers per inverted list. However, because K is orders of magnitude smaller than the total number of non-zero coordinates in the collection, the increase to the total size of the inverted index is mild at worst. Moreover, skip lists can be further compressed using an integer or integer-list codec.

6.2 Query Processing over Partitioned Inverted Lists

When Algorithm 2 gives us a set of partitions P_I to probe, we use a simple coordinate-at-a-time scheme to compute the scores of documents in the union of the probed partitions and return the top-k vectors.
When processing coordinate t and accumulating partial inner product scores, we have two operations to perform. First, we must take the intersection of the skip list and the list of whitelisted partitions: P_I ∩ S[t].PartitionId (where the operator PartitionId returns the partition identifier of every element in the skip list). Only then do we traverse the inverted list I[t] by looking at the offsets of the partitions in the intersection set. One possible instance of this procedure is described in Algorithm 4.

Algorithm 4: Query processing over partitioned inverted lists
Input: Inverted index, I; skip list, S, obtained from Algorithm 3; sparse query vector, q; set of partitions to probe, P_I, from Algorithm 2.
Result: Top k vectors.
1: scores ← ∅  ▷ a mapping from documents to scores
2: for t ∈ nz(q) do
3:   SLPosition ← 0  ▷ pointer into the skip list S[t]
4:   for P_i ∈ P_I do
5:     advance SLPosition until the partition of S[t][SLPosition] matches P_i
6:     begin ← S[t][SLPosition].Offset
7:     end ← S[t][SLPosition + 1].Offset
8:     for (docid, value) ∈ I[t][begin . . . end] do
9:       scores[docid] ← scores[docid] + q_t × value
10:    end for
11:  end for
12: end for
13: return top k documents given scores
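A direct transcription of this traversal into Python might look like the following sketch (ours). For simplicity it scans all runs of each inverted list and tests partition membership, rather than co-advancing two sorted pointers as Algorithm 4 does; the `index` and `skips` structures are those of the earlier construction sketch.

```python
import heapq
from collections import defaultdict

def query_partitioned_index(query, index, skips, probe, k):
    """query: {coordinate: value}; probe: set of whitelisted partition ids."""
    scores = defaultdict(float)
    for t, q_t in query.items():
        runs = skips.get(t, [])
        postings = index.get(t, [])
        for pos, (pid, begin) in enumerate(runs):
            if pid not in probe:
                continue  # skip this partition's block entirely
            end = runs[pos + 1][1] if pos + 1 < len(runs) else len(postings)
            for doc_id, value in postings[begin:end]:
                scores[doc_id] += q_t * value
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])
```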
6.3 Empirical Evaluation

There are four key properties that we wish to evaluate. Naturally, we care about the efficiency of Algorithms 3 and 4 when we use them as R in Algorithm 2. But, seeing as the partitioning performed by Algorithm 1 is not guaranteed to be the optimal partitioning P*, we understand there is a risk of losing retrieval accuracy by probing only a fraction of the partitions, as demonstrated in Section 5. As such, the second important property is the effectiveness of the methods presented here. We thus report throughput versus accuracy as one trade-off space of interest. We also presented Algorithms 3 and 4 as a new dynamic pruning method for the inverted index. To show that, for different levels of accuracy, we indeed prune the inverted lists, we additionally report the size of the pruned space as we process queries. A third factor is the size of the inverted index and the inflation due to (a) the additional data structure that holds skip pointers and (b) the partition centroids produced by Algorithm 1. We evaluate this aspect as well, but we do not apply compression anywhere in our evaluation: we consider compression to be orthogonal to this work and only report the overhead. Finally, we implemented Algorithms 1 through 4 with parallelism enabled within and across queries. We believe, therefore, it is important to measure the effect of the number of CPU cores on throughput. As such, we present throughput measurements while varying the number of cores we make available to the algorithms.
6.3.1 Baseline Retrieval Algorithm. As argued earlier, we are interested in general sparse vectors, such as those produced by Splade, which exhibit distributional properties that differ from traditional sparse vectors based on lexical models of relevance. It has been noted by others [16, 48] that exhaustive disjunctive query processing over the inverted index, a method Bruch et al. refer to as LinScan, outperforms all dynamic pruning-based optimization methods and represents a strong baseline. We therefore use LinScan as our baseline system. LinScan is a safe algorithm, as it evaluates every qualified document (i.e., every document that contains at least one non-zero coordinate of the query vector). But as Bruch et al. show in [16], there is a simple strategy to turn LinScan into an approximate algorithm: by giving it a time budget, we can ask it to process as many coordinates as possible until the budget is exhausted, at which point LinScan returns the approximate top-k set according to the accumulated partial inner product scores. We use this variant to obtain approximate top-k sets for comparison with our own approximate algorithms.
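The budgeted variant is easy to sketch. The following is our illustration, not the authors' code: it processes query coordinates one at a time and stops once a wall-clock budget is spent, returning whatever partial scores have accumulated.

```python
import heapq
import time
from collections import defaultdict

def linscan_budgeted(query, index, k, budget_s):
    """Coordinate-at-a-time scoring with a time budget (approximate LinScan).

    query: {coordinate: value}; index: {coordinate: [(doc_id, value), ...]}.
    """
    scores = defaultdict(float)
    deadline = time.monotonic() + budget_s
    for t, q_t in query.items():
        for doc_id, value in index.get(t, []):
            scores[doc_id] += q_t * value
        if time.monotonic() >= deadline:
            break  # budget exhausted: return partial inner products
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])
```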
6.3.2 Throughput versus Accuracy. The first topic of evaluation is the trade-off between throughput and accuracy. We can trade one factor off for the other by adjusting the parameter ℓ in Algorithm 2: a smaller ℓ results in probing fewer partitions, which in turn leads to faster retrieval but lower quality. Letting ℓ approach the size of the collection, on the other hand, results in the algorithm probing every partition, leading to slower but higher-quality retrieval. We tune this knob as we perform top-10 retrieval over our datasets. We use Splade and Efficient Splade vectors as input to the algorithms, sketch them using the JL and Weak Sinnamon transforms, but partition the data only using spherical KMeans. The results of our experiments are shown in Figures 6 and 7. In order to digest the trends, we must recall that the throughput of our retrieval method is affected by two factors: the time it takes to compute the inner product of the query vector with cluster centroids, and the time it takes to execute algorithm R on the subset of partitions identified in the previous step. In the low-recall regime, we expect the first factor to make up the bulk of the processing time, while in the high-recall regime the cost of executing R starts to dominate the overall processing time.

That phenomenon is evident in the figures for both the Splade and Efficient Splade experiments. It also explains why, when sketching is done with Weak Sinnamon, throughput is much better than with the JL transform: Weak Sinnamon creates sparse query sketches, which lead to faster inner product computation with partition centroids. What is also clear from our experiments is that our approximate method always compares favorably to the approximate baseline.
Fig. 6. Throughput (as queries per second) versus top-10 retrieval accuracy on Splade-encoded datasets: (a) MS Marco, (b) NQ, (c) Quora, (d) HotpotQA, (e) Fever, (f) DBPedia. We limit the experiments to an instance of Algorithm 1 that uses spherical KMeans. Included here is an approximate variant of an exhaustive disjunctive query processor (LinScan). We use 20 CPU cores and repeat each experiment 10 times for a more reliable throughput measurement. Axes are not consistent across figures.
In fact, for the same desired accuracy, our method often reaches a throughput that is orders of magnitude larger than the baseline's. For instance, on MS Marco encoded with Splade, an instance of our algorithm that operates on Weak Sinnamon
sketches processes queries at an extrapolated rate of approximately 2,000 queries per second while delivering 90% accuracy, whereas the baseline method yields a throughput of roughly 150 queries per second. At lower recalls, the gap is substantially wider.

Fig. 7. Throughput versus top-10 retrieval accuracy on Efficient Splade-encoded datasets: (a) MS Marco, (b) NQ, (c) Quora, (d) HotpotQA, (e) Fever, (f) DBPedia. Setup is as in Figure 6.

As we require higher accuracy, all methods become slower. Ultimately, of course, if we set ℓ too high, our algorithms become slower than the exact baseline. That is because our approximate
algorithms have to pay the price of computing inner products with centroids and must execute the additional step of intersecting P_I with the skip lists. We do not show this empirically, however.

Fig. 8. Percentage of qualified documents (i.e., documents that contain at least one non-zero coordinate of the query) pruned versus top-10 accuracy for the MS Marco dataset, for (a) Splade and (b) Efficient Splade. In this setup, Algorithm 1 uses Weak Sinnamon along with spherical KMeans for partitioning. Note the irregular spacing of the horizontal axes.

6.3.3 Effect of Dynamic Pruning. As we already explained, when we adjust the parameter ℓ in Algorithm 2, we control the number of documents the sub-algorithm R is allowed to evaluate. While we studied the impact of ℓ on efficiency as measured by throughput, here we wish to understand its effect in terms of the amount of pruning it induces. While throughput measurements depend on our specific implementation of Algorithm 4, measuring the portion of documents pruned is implementation-agnostic and, as such, serves as a more definitive measure of efficiency. To that end, we count, for each query, the actual number of documents evaluated by Algorithm 4 as we gradually increase ℓ.
We plot this quantity in Figure 8 for MS Marco, from a configuration of our algorithms that uses Weak Sinnamon and spherical KMeans. To improve visualization, we show not raw counts but the percentage of qualified documents (defined, once again, as the number of documents that contain at least one non-zero coordinate of the query) that Algorithm 4 evaluates. That is indicative of how much of the inverted lists the algorithm manages to skip. As one observes, in the low-recall region, the algorithm probes only a fraction of the inverted lists. On the Splade dataset, the algorithm reaches a top-10 accuracy of 0.94 by merely evaluating, on average, about 10% of the total number of documents in the inverted lists. On Efficient Splade, as expected, the algorithm is relatively less effective. These results are encouraging: they show the potential that a clustering-based organization of the inverted index has for dynamic pruning in approximate MIPS. Importantly, this method does not require the vectors to follow certain distributions or be non-negative.

6.3.4 Index Size Overhead. As we mentioned earlier, our algorithms add overhead to the index structure required for query processing. If our reference point is the LinScan algorithm with a basic (uncompressed) inverted index, our methods introduce two additional structures: (a) the skip list, S, in Algorithm 3; and (b) the array of 4√|X| centroids produced by Algorithm 1. We next measure this overhead.
Table 2. Index sizes in GB. The index in LinScan is made up of an inverted index with document identifiers and floating point values (uncompressed). The index in our method stores 4√|X| centroids from the application of spherical KMeans to Weak Sinnamon for dataset X, an inverted index with the same size as LinScan's, and the skip list structure S.

|           | Method  | MS Marco  | NQ          | Quora       | HotpotQA   | Fever     | DBPedia   |
| Splade    | LinScan | 8.4       | 3.1         | 0.27        | 5.1        | 5.9       | 4.7       |
|           | Ours    | 9.0 (+7%) | 3.43 (+10%) | 0.32 (+18%) | 5.5 (+8%)  | 6.3 (+7%) | 5.0 (+6%) |
| E. Splade | LinScan | 12        | 4.2         | 0.27        | 4.9        | 5.7       | 4.6       |
|           | Ours    | 13 (+8%)  | 4.7 (+12%)  | 0.37 (+37%) | 5.4 (+10%) | 6.2 (+9%) | 5.0 (+9%) |
We report our findings in Table 2 for the Splade and Efficient Splade vector datasets, measured in GB of space after serialization to disk. We reiterate that we do not apply compression to the index: there is an array of compression techniques that can be applied to the different parts of the data structure (such as quantization, approximation, and d-gap compression), and choosing any of them would arbitrarily conflate the inflation due to the overhead with the compression rate. We observe that the overhead of our method on larger datasets is relatively mild. The increase in size ranges from 6% to 10% (Quora excluded) for the Splade-encoded datasets, and spans a slightly wider and larger range for the Efficient Splade-encoded datasets.

6.3.5 Effect of Parallelism. We conclude the empirical evaluation of our approximate algorithm by repeating the throughput-accuracy experiments with a different number of CPUs. In our implementation, we take advantage of access to multiple processors by parallelizing the computation of inner products between queries and centroids (in Algorithm 2) for each query, in addition to distributing the queries themselves to the available CPUs. As a result of this concurrency paradigm, we expect that, by reducing the number of CPUs available to the algorithm, throughput will be more heavily affected in low-recall regions (when ℓ
is small). Figure 9 shows the results of these experiments on the Splade- and Efficient Splade-encoded MS Marco dataset. The figures include only a configuration of our algorithms with spherical KMeans and Weak Sinnamon. It is easy to confirm that our hypothesis from above holds: in low-recall regions, where computation is heavily dominated by the cost of computing inner products with centroids, throughput decreases considerably as we reduce the number of CPUs.

Fig. 9. Effect of changing the number of CPUs on throughput, for (a) Splade and (b) Efficient Splade. The figures illustrate these measurements for MS Marco and a particular configuration of our algorithm that uses spherical KMeans over Weak Sinnamon sketches. We include LinScan executed on 20 CPUs from Figures 6 and 7 as a point of reference.

7 TOWARDS A UNIFIED FRAMEWORK FOR MIPS

Sections 4 through 6 presented a complete instance of Algorithm 2 for IVF-based MIPS over sparse vectors. But recall that we borrowed the idea of IVF-based search from the dense MIPS literature.
So it is only natural to pose the following question: now that we have an arbitrarily-accurate IVF algorithm for sparse vectors, can we extend it to hybrid vectors in ℝ^(m+N), whose coordinates decompose into a dense part in ℝ^m and a sparse part in ℝ^N? In this section, we unpack that question and investigate possible directions at a high level, to explore the feasibility and benefits of such an approach. First, however, let us motivate the question.

7.1 Motivation

We described the changing landscape of retrieval in Section 1. From lexical-semantic search to multi-modal retrieval, for many emerging applications the ability to conduct MIPS over hybrid vectors efficiently and effectively is a requisite. One viable approach to searching over a collection
of hybrid vectors X is to simply decompose the process into separate MIPS questions, one over the dense subspace X_d and the other over the sparse subspace X_s, followed by an aggregation of the retrieved sets. Indeed, this approach has become the de facto solution for hybrid vector retrieval [12, 17]. The two-stage retrieval system works as follows: when a hybrid query vector q
∈ ℝ^(m+N) arrives and the retrieval system is expected to return the top k documents, commonly, q_d is sent to the dense MIPS system with a request for the top k′ ≥ k vectors, and q_s to the sparse retrieval component with a similar request. Documents in the union of the two sets are subsequently scored and reranked to produce an approximate set of top-k vectors, S̃:

S̃ = arg max^(k)_{x ∈ S_d ∪ S_s} ⟨q, x⟩,   (10)

where

S_d = arg max^(k′)_{x ∈ X} ⟨q_d, x_d⟩ and S_s = arg max^(k′)_{x ∈ X} ⟨q_s, x_s⟩.   (11)
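As a concrete illustration of this two-stage baseline, consider the following numpy sketch (ours); for simplicity both subspaces are materialized as dense matrices, and exhaustive search stands in for the dense and sparse MIPS sub-systems.

```python
import numpy as np

def two_stage_topk(q_dense, q_sparse, dense_docs, sparse_docs, k, k_prime):
    """Retrieve k' candidates per subspace, union them, rerank by the full score."""
    s_d = np.argsort(-(dense_docs @ q_dense))[:k_prime]    # top-k' by dense score
    s_s = np.argsort(-(sparse_docs @ q_sparse))[:k_prime]  # top-k' by sparse score
    candidates = np.union1d(s_d, s_s)
    full = dense_docs[candidates] @ q_dense + sparse_docs[candidates] @ q_sparse
    return candidates[np.argsort(-full)[:k]]
```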
Let us set aside the effectiveness of the setup above for a moment and consider its complexity from a systems standpoint. It is clear that, both for researchers and practitioners, studying and creating
two disconnected, incompatible systems adds unwanted costs. For example, systems developers must take care to keep all documents in sync between the two indexes at all times. Reasoning about the (mis)behavior of the retrieval system, as another example, requires investigating one layer of indirection and understanding the processes leading to two separately retrieved sets. These collectively pose a challenge to systems researchers and add difficulty to operations in production. Furthermore, it is easy to see that the least scalable of the two systems dictates the overall latency and throughput capacity. Even if we accepted the cost of studying two separate systems or deemed it negligible, and further decided scalability is not a concern, it is not difficult to show that such a heterogeneous design may prove wasteful or outright ineffective in the general case. More concretely, depending on how the ℓ2 mass of the query and document vectors is split between the dense subspace and the sparse subspace, the two sub-systems involved may have to resort to a large k′ in order to ensure an accurate final retrieved set at rank k.

Fig. 10. Top-10 accuracy of the two-stage retrieval system for hybrid vectors. We retrieve k′ candidates from each sub-system and rerank them to find the top-10 set. We prepare the hybrid vectors by first normalizing the dense and sparse parts separately, then constructing query vectors as q = w_dense·q_d + (1 − w_dense)·q_s, where q_d and q_s are sampled from the data distribution. In effect, w_dense shifts the ℓ2 mass from the sparse to the dense subspace, giving more importance to one subspace over the other during retrieval.
While the phenomenon above is provable, we demonstrate its effect with a simple (though contrived) experiment. We generate a collection of 100,000 documents and 1,000 queries. Each vector is a hybrid of a dense and a sparse vector. The dense vectors are in ℝ^64, with each coordinate drawing its value from an exponential distribution (with scale 0.5). The sparse vectors are in ℝ^1000 with an average of 16 non-zero coordinates, whose values are drawn from the same exponential distribution (scale 0.5). We use different seeds for the pseudo-random generator when creating document and query vectors.
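The data generation is straightforward to reproduce. The sketch below (ours) follows the description above, materializing the sparse vectors as dense arrays for simplicity; the Poisson choice for the number of non-zero coordinates is our assumption, as the text only states the average. It also applies the per-part normalization and w_dense weighting described in the next paragraph.

```python
import numpy as np

def make_hybrid_collection(n, dense_dim=64, sparse_dim=1000, avg_nnz=16, seed=0):
    """Dense part ~ Exp(scale=0.5); sparse part has ~avg_nnz non-zero Exp values."""
    rng = np.random.default_rng(seed)
    dense = rng.exponential(scale=0.5, size=(n, dense_dim))
    sparse = np.zeros((n, sparse_dim))
    for row in sparse:
        nnz = min(rng.poisson(avg_nnz), sparse_dim)  # assumed distribution of nnz
        idx = rng.choice(sparse_dim, size=nnz, replace=False)
        row[idx] = rng.exponential(scale=0.5, size=nnz)
    return dense, sparse

def normalize(m):
    return m / np.maximum(np.linalg.norm(m, axis=1, keepdims=True), 1e-12)

docs_d, docs_s = make_hybrid_collection(100_000)
qrys_d, qrys_s = make_hybrid_collection(1_000, seed=1)  # different seed for queries

w_dense = 0.5                              # swept over the unit interval
q_d = w_dense * normalize(qrys_d)          # ℓ2 mass shifted between
q_s = (1.0 - w_dense) * normalize(qrys_s)  # the two subspaces
```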
In order to study how the ratio of ℓ2 mass between the dense and sparse subspaces affects retrieval quality, we first normalize the generated dense and sparse vectors separately. During retrieval, we amplify the dense part of the query vector by a weight between 0 and 1 and multiply the sparse part by one minus that weight. In the end, we perform retrieval for a query vector q that can be written as q = w_dense·q_d + (1 − w_dense)·q_s. By letting w_dense sweep the unit interval, we simulate a shift of the ℓ2 mass of the hybrid vector from the sparse to the dense subspace. Over the generated collection, we conduct exact retrieval using exhaustive search and obtain the top k = 10 vectors for each query by maximizing the inner product.
Algorithm 5: Indexing of hybrid vectors
Input: Collection X of hybrid vectors in ℝ^(m+N); number of clusters, K; random projector, φ: ℝ^N → ℝ^n where n ≪ N; clustering algorithm Cluster that returns partitions of the input data and their representatives.
Result: Cluster assignments P_i = { j | x^(j) ∈ partition i } and cluster representatives C_i's.
1: X̃ ← { x_d ⊕ φ(x_s) | x_d ⊕ x_s ∈ X }
2: Partitions, Representatives ← Cluster(X̃; K)
3: P_i ← { j | x̃^(j) ∈ Partitions[i] }, ∀ 1 ≤ i ≤ K
4: C_i ← Representatives[i], ∀ 1 ≤ i ≤ K
5: return P and C
Algorithm 6: Retrieval of hybrid vectors
Input: Hybrid query vector, q ∈ ℝ^(m+N); clusters and representatives, P, C, obtained from Algorithm 5; random projector, φ: ℝ^N → ℝ^n; number of data points to examine, ℓ ≤ |X|, where |X| denotes the size of the collection; hybrid MIPS sub-algorithm, R.
Result: Approximate set of top k vectors that maximize inner product with q.
1: q̃ ← q_d ⊕ φ(q_s)
2: SortedClusters ← SortDescending(P by ⟨q̃, C_i⟩)
3: TotalSize ← 0
4: I ← ∅
5: for P_{π_i} ∈ SortedClusters do
6:   I ← I ∪ { π_i }
7:   TotalSize ← TotalSize + |P_{π_i}|
8:   if TotalSize ≥ ℓ then break
9: end for
10: return top k vectors from partitions P_I ← { P_i | i ∈ I } w.r.t. ⟨q, ·⟩ using R
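A compact sketch of both algorithms (ours) might look as follows; scikit-learn's KMeans stands in for the clustering step, a Gaussian random projection for the projector φ, and exhaustive search for the sub-algorithm R.

```python
import numpy as np
from sklearn.cluster import KMeans

def index_hybrid(dense, sparse, n_clusters, proj):
    """Algorithm 5 sketch: cluster the sketches x_d ⊕ φ(x_s)."""
    sketches = np.hstack([dense, sparse @ proj])
    km = KMeans(n_clusters=n_clusters, n_init="auto").fit(sketches)
    return km.labels_, km.cluster_centers_

def retrieve_hybrid(q_dense, q_sparse, dense, sparse, labels, centers, proj, ell, k):
    """Algorithm 6 sketch, with exhaustive search playing the role of R."""
    q_sketch = np.concatenate([q_dense, q_sparse @ proj])
    order = np.argsort(-(centers @ q_sketch))        # partitions by ⟨q̃, C_i⟩
    probe, total = [], 0
    for c in order:                                  # accumulate until ℓ points
        probe.append(c)
        total += int(np.count_nonzero(labels == c))
        if total >= ell:
            break
    ids = np.flatnonzero(np.isin(labels, probe))
    scores = dense[ids] @ q_dense + sparse[ids] @ q_sparse  # exact ⟨q, x⟩
    return ids[np.argsort(-scores)[:k]]
```

A JL-style projector can be instantiated as, e.g., proj = rng.normal(size=(N, n)) / np.sqrt(n); spherical KMeans, the choice used elsewhere in the paper, would additionally ℓ2-normalize the sketches before clustering.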
We then use the two-stage design by asking each sub-system to return the (exact) top k′ vectors for k′ ∈ [100], and reranking the union set to obtain the final top k = 10 documents. We then measure the top-k accuracy of the two-stage architecture. Figure 10 plots accuracy versus k′
for different values of w_dense. It is easy to see that, as one subspace becomes more important than the other, the retrieval quality changes too. Importantly, a larger k′ is often required to attain a high accuracy. The factors identified in this section (systems complexity, the scalability bottleneck, and the sub-optimality of retrieval quality) nudge us in the direction of a unified framework for MIPS.

7.2 IVF MIPS for Hybrid Vectors

We present a simple extension of the IVF indexing and retrieval duo of Algorithms 1 and 2 that generalizes the logic to hybrid vectors. This is shown in Algorithms 5 and 6, where the only two differences with the original algorithms are that (a) sketching is applied only to the sparse portion of vectors, to form new vectors in ℝ^(m
+n) instead of ℝ^(m+N), and (b) the sub-algorithm R is assumed to carry out top-k retrieval over hybrid vectors from a given set of partitions. In this section, we only verify the viability of the extended algorithms and leave an in-depth investigation of the proposal to future work. As such, we use exhaustive search as the sub-algorithm
R, and acknowledge that any observations made using such an algorithm speak only to the effectiveness of the method, not its efficiency.

7.3 Empirical Evaluation

Let us repeat the experiment from Section 7.1 on synthetic vectors and compare the two-stage retrieval process with the unified framework in terms of retrieval accuracy. To that end, we design the following protocol. First, we perform exact MIPS using exhaustive search over the hybrid collection of vectors. The set of top-k documents obtained in this way makes up the ground truth for each query. Next, we consider the two-stage system. We retrieve through exhaustive search the exact set of top-k′
(for a large k′) documents according to their sparse inner product, and another (possibly overlapping) set by their dense inner product. From the two ranked lists, we accumulate enough documents from the top such that the size of the resulting set is roughly equal to k. In this way, we can measure the top-k accuracy of the two-stage system against the ground truth. Finally, we turn to the unified framework. We use the JL transform to reduce the dimensionality of the sparse vectors, and spherical KMeans to partition the vectors. We then proceed as usual and measure top-k accuracy for different values of ℓ.
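Measuring top-k accuracy against the exhaustive-search ground truth then reduces to a set overlap; a minimal helper (ours) could be:

```python
def topk_accuracy(retrieved, ground_truth):
    """Fraction of the exact top-k set recovered by the approximate method."""
    return len(set(retrieved) & set(ground_truth)) / len(ground_truth)
```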
From these experiments, we wish to understand whether and when the accuracy of the unified framework exceeds that of the two-stage setup. If the unified system is able to surpass the accuracy of the two-stage system by examining a relatively small portion of the collection (a quantity controlled through ℓ), then that is indicative of the viability of the proposal. Indeed, as Figure 11 shows, the unified system almost always reaches a top-10 accuracy higher than the two-stage system's while evaluating less than 2% of the collection.

8 DISCUSSION AND CONCLUSION

We began this research with a simple question:
Can we apply dense MIPS algorithms to sparse vectors? That led us to investigate different dimensionality reduction techniques for sparse vectors as a way to contain the curse of dimensionality. We showed, for example, that the JL transform and Sinnamon behave differently on sparse vectors and can preserve inner product to different degrees. We also thoroughly evaluated the effect of clustering on sparse MIPS in the context of an IVF-based retrieval system. Coupling dimensionality reduction with clustering realized an effective IVF system for sparse vectors, summarized in Algorithms 1 and 2.
The protocol is easy to describe, and is as follows. We sketch sparse vectors into a lower-dimensional (dense or sparse) subspace in a first step. We then apply clustering to the sketches and partition the data into a predetermined number of clusters, each identified by a representative (e.g., a centroid). When the system is presented with a query, we sketch the query (asymmetrically) and identify the top partitions by taking the inner product between the query and the cluster representatives. We then execute a secondary sub-algorithm to perform MIPS on the restricted subset of document vectors. In our presentation of the material above, we observed a strong, natural connection between clustering for IVF and dynamic pruning methods for inverted indexes. We developed that insight into an inverted index-based algorithm that can serve as the sub-algorithm in the above search procedure. Importantly, the algorithm organizes documents within an inverted list by partition identifier, rather than the conventional arrangement by document identifier or impact score. Such an organization, coupled with skip pointers, enables the algorithm to search over only the subset of documents that belong to the top partitions determined by the IVF method. Crucially, the algorithm is agnostic to the vector distribution and admits real-valued vectors.
Fig. 11. Top-10 accuracy over hybrid vectors as a function of the percentage of documents probed, for (a) w_dense = 0.2, (b) w_dense = 0.5, and (c) w_dense = 0.8. w_dense controls how much of the ℓ2 mass of a hybrid vector is concentrated in its dense subspace. We also plot the performance of the two-stage system, where each sub-system returns the set of top-k′
documents according to sparse or dense inner product scores, such that the size of the union of the two sets is roughly k.

Finally, we discussed how our proposal leads to a unified retrieval framework for hybrid vectors. By sketching the sparse sub-vectors and constructing an IVF index for the transformed hybrid vectors, we showed that it is possible to achieve better recall than a two-stage system where dense and sparse sub-vectors are handled separately. An added advantage of the unified approach is that its accuracy remains robust under different vector distributions, as the mass shifts from the dense to the sparse subspace. We limited our discussion of hybrid MIPS to synthetic vectors, as we were only interested in the viability of this byproduct of our primary research question. We acknowledge that we have only scratched the surface of retrieval over hybrid vectors. There are a multitude of open questions within the unified regime that warrant further investigation, including many minor but practical aspects of the framework that we conveniently ignored in our high-level description. We leave those as future work. We believe our investigation of MIPS for sparse (and hybrid) vectors provides many opportunities for information retrieval researchers. One line of research most immediately affected by our proposal is sparse representation learning. Models such as Splade are not only competitive on in- and out-of-domain tasks, they also produce inherently interpretable representations of text,
a desirable behavior in many production systems. However, sparse embeddings have, by and large, been tailored to existing retrieval regimes. For example, Efficient Splade learns sparser queries for better latency. uniCoil [39] collapses the term representations of Coil [26] to a scalar for compatibility with inverted indexes. We claim that our proposed regime is a step toward removing such constraints, enabling researchers to explore sparse representations without much restraint, leading to potentially different behavior. As we observe in Figures 4 and 5, for example, Splade
111:31 111:31 111:32 # Sebastian Bruch, Franco Maria Nardini, Amir Ingber, and Edo Liberty vectors are more amenable to clustering than Efficient Splade, and may even prove more efficient within the new framework. That is good news as there is evidence suggesting that Splade is more effective than its other variant on out-of-domain data [38]. Another related area of research that can benefit from our proposed regime is multi-modal and multimedia retrieval. Because our framework is agnostic to the distribution of the hybrid vectors, it is entirely plausible to formulate the multi-modal problem as MIPS over hybrid vectors, especially when one of the modes involves textual data, is data that is partially sparse, or where one may need to engineer (sparse) features to augment dense embeddings. REFERENCES [1] Nir Ailon and Bernard Chazelle. 2006. Approximate Nearest Neighbors and the Fast Johnson-Lindenstrauss Transform. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (Seattle, WA, USA). 557â
563.
[2] Nir Ailon and Bernard Chazelle. 2009. The Fast Johnson–Lindenstrauss Transform and Approximate Nearest Neighbors. SIAM J. Comput. 39, 1 (2009), 302–322.
[3] Nir Ailon and Edo Liberty. 2011. An Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms (San Francisco, California). 185–191.
[4] Nir Ailon and Edo Liberty. 2013. An Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform. ACM Trans. Algorithms 9, 3, Article 21 (June 2013), 12 pages.
[5] David Arthur and Sergei Vassilvitskii. 2007. K-Means++:
The Advantages of Careful Seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (New Orleans, Louisiana). 1027–1035.
[6] Nima Asadi. 2013. Multi-Stage Search Architectures for Streaming Documents. University of Maryland.
[7] Nima Asadi and Jimmy Lin. 2013. Effectiveness/Efficiency Tradeoffs for Candidate Generation in Multi-Stage Retrieval Architectures. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval (Dublin, Ireland). 997–
1000.
[8] Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. 2015. Clustering is Efficient for Approximate Maximum Inner Product Search. arXiv:1507.05910 [cs.LG]
[9] Yang Bai, Xiaoguang Li, Gang Wang, Chaoliang Zhang, Lifeng Shang, Jun Xu, Zhaowei Wang, Fangshan Wang, and Qun Liu. 2020.
SparTerm: Learning Term-based Sparse Representation for Fast Text Retrieval.
[10] Richard Baraniuk, M. Davenport, Ronald DeVore, and M. Wakin. 2006. The Johnson-Lindenstrauss Lemma Meets Compressed Sensing. IEEE Transactions on Information Theory 52 (01 2006), 1289–1306.
[11] Andrei Z. Broder, David Carmel, Michael Herscovici, Aya Soffer, and Jason Zien. 2003.
Efficient Query Evaluation Using a Two-Level Retrieval Process. In Proceedings of the Twelfth International Conference on Information and Knowledge Management (New Orleans, LA, USA). 426–434.
[12] Sebastian Bruch, Siyu Gai, and Amir Ingber. 2023. An Analysis of Fusion Functions for Hybrid Retrieval. ACM Transactions on Information Systems 42, 1, Article 20 (August 2023), 35 pages.
[13] Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2022. ReNeuIR: Reaching Efficiency in Neural Information Retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 3462–3465.
[14] Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2023.
Efficient and Effective Tree-based and Neural Learning to Rank. Foundations and Trends in Information Retrieval 17, 1 (2023), 1–123.
[15] Sebastian Bruch, Joel Mackenzie, Maria Maistro, and Franco Maria Nardini. 2023. ReNeuIR at SIGIR 2023: The Second Workshop on Reaching Efficiency in Neural Information Retrieval. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (Taipei, Taiwan). 3456–3459.
[16] Sebastian Bruch, Franco Maria Nardini, Amir Ingber, and Edo Liberty. 2023. An Approximate Algorithm for Maximum Inner Product Search over Streaming Sparse Vectors. ACM Transactions on Information Systems (July 2023). Just Accepted.
[17] Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, and Marc Najork. 2022. Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models. In Advances in Information Retrieval: 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10–14, 2022, Proceedings, Part I (Stavanger, Norway). 95–110.
[18] Matt Crane, J. Shane Culpepper, Jimmy Lin, Joel Mackenzie, and Andrew Trotman. 2017.
A Comparison of Document-at-a-Time and Score-at-a-Time Query Evaluation. In Proceedings of the 10th ACM International Conference on Web Search and Data Mining (Cambridge, United Kingdom). 201–210.
[19] Zhuyun Dai and Jamie Callan. 2020. Context-Aware Term Weighting For First Stage Passage Retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, China). 1533–
1536.
[20] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 4171–4186.
[21] Inderjit S. Dhillon and Dharmendra S. Modha. 2001.
Concept Decompositions for Large Sparse Text Data Using Clustering. Machine Learning 42, 1 (01 January 2001), 143–175.
[22] Constantinos Dimopoulos, Sergey Nepomnyachiy, and Torsten Suel. 2013. Optimizing Top-k Document Retrieval Strategies for Block-Max Indexes. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining (Rome, Italy). 113–122.
[23] Shuai Ding and Torsten Suel. 2011.
Faster Top-k Document Retrieval Using Block-Max Indexes. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (Beijing, China). 993–1002.
[24] Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2353–2359.
[25] Thibault Formal, Benjamin Piwowarski, and Stéphane Clinchant. 2021.
SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, Canada). 2288–2292.
[26] Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021. 3030–3042.
[27] Bob Goodwin, Michael Hopcroft, Dan Luu, Alex Clemmer, Mihaela Curmei, Sameh Elnikety, and Yuxiong He. 2017.
BitFunnel: Revisiting Signatures for Search. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (Shinjuku, Tokyo, Japan). 605–614.
[28] Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating Large-Scale Inference with Anisotropic Vector Quantization. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research). 3887–
3896.
[29] Qiang Huang, Jianlin Feng, Yikai Zhang, Qiong Fang, and Wilfred Ng. 2015. Query-Aware Locality-Sensitive Hashing for Approximate Nearest Neighbor Search. Proc. VLDB Endow. 9, 1 (Sep 2015), 1–12.
[30] Piotr Indyk and Rajeev Motwani. 1998. Approximate Nearest Neighbors:
Towards Removing the Curse of Dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing (Dallas, Texas, USA). 604–613.
[31] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. 2011. Product Quantization for Nearest Neighbor Search. IEEE Trans. Pattern Anal. Mach. Intell. 33, 1 (2011), 117–128.
[32] Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021.
Billion-Scale Similarity Search with GPUs. IEEE Transactions on Big Data 7 (2021), 535–547.
[33] William B. Johnson and Joram Lindenstrauss. 1984. Extensions of Lipschitz Mappings into Hilbert Space. Contemp. Math. 26 (1984), 189–206.
[34] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020.
Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
[35] Hyunjoong Kim, Han Kyul Kim, and Sungzoon Cho. 2020. Improving Spherical k-Means for Document Clustering: Fast Initialization, Sparse Centroid Projection, and Efficient Cluster Labeling. Expert Systems with Applications 150 (2020), 113288.
[36] Aditya Krishnan and Edo Liberty. 2021. Projective Clustering Product Quantization. arXiv:2112.02179 [cs.DS]
[37] Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020.
Leveraging Semantic and Lexical Matching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach. (2020). arXiv:2010.01195 [cs.IR]
[38] Carlos Lassance and Stéphane Clinchant. 2022. An Efficiency Study for SPLADE Models. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2220–2226.
[39] Jimmy Lin and Xueguang Ma. 2021. A Few Brief Notes on DeepImpact, COIL, and a Conceptual Framework for Information Retrieval Techniques. arXiv:2106.14807 [cs.IR]
[40] Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021. Pretrained Transformers for Text Ranking: BERT and Beyond. arXiv:2010.06467 [cs.IR]
[41] Jimmy Lin and Andrew Trotman. 2015. Anytime Ranking for Impact-Ordered Indexes. In Proceedings of the 2015 International Conference on The Theory of Information Retrieval (Northampton, Massachusetts, USA). 301–304.
[42] Jie Liu, Xiao Yan, Xinyan Dai, Zhirong Li, James Cheng, and Ming-Chang Yang. 2019. Understanding and Improving Proximity Graph based Maximum Inner Product Search. arXiv:1909.13459 [cs.IR]
[43] Changyi Ma, Fangchen Yu, Yueyao Yu, and Wenye Li. 2021. Learning Sparse Binary Code for Maximum Inner Product Search. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (Virtual Event, Queensland, Australia). 3308–3312.
[44] Ji Ma, Ivan Korotkov, Keith Hall, and Ryan T.
McDonald. 2020. Hybrid First-stage Retrieval Models for Biomedical Literature. In CLEF.
[45] Xueguang Ma, Kai Sun, Ronak Pradeep, and Jimmy J. Lin. 2021. A Replication Study of Dense Passage Retriever. (2021). arXiv:2104.05740 [cs.IR]
[46] Joel Mackenzie, Antonio Mallia, Alistair Moffat, and Matthias Petri. 2022. Accelerating Learned Sparse Indexes Via Term Impact Decomposition. In Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics, 2830–2842.
[47] Joel Mackenzie, Matthias Petri, and Alistair Moffat. 2021. Anytime Ranking on Document-Ordered Indexes. ACM Transactions on Information Systems 40, 1, Article 13 (Sep 2021), 32 pages.
[48] Joel Mackenzie, Andrew Trotman, and Jimmy Lin. 2021. Wacky Weights in Learned Sparse Representations and the Revenge of Score-at-a-Time Query Evaluation. arXiv:2110.11540 [cs.IR]
[49] Joel Mackenzie, Andrew Trotman, and Jimmy Lin. 2022. Efficient Document-at-a-Time and Score-at-a-Time Query Evaluation for Learned Sparse Representations. ACM Transactions on Information Systems (Dec 2022).
[50] Yu. A. Malkov and D. A. Yashunin. 2016. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs. arXiv:1603.09320 [cs.DS]
[51] Antonio Mallia, Omar Khattab, Torsten Suel, and Nicola Tonellotto. 2021. Learning Passage Impacts for Inverted Indexes. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, Canada). 1723–
1727.
[52] Antonio Mallia, Joel Mackenzie, Torsten Suel, and Nicola Tonellotto. 2022. Faster Learned Sparse Retrieval with Guided Traversal. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 1901–1905.
[53] Antonio Mallia, Giuseppe Ottaviano, Elia Porciani, Nicola Tonellotto, and Rossano Venturini. 2017.
Faster BlockMax WAND with Variable-Sized Blocks. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (Shinjuku, Tokyo, Japan). 625–634.
[54] Antonio Mallia and Elia Porciani. 2019. Faster BlockMax WAND with Longer Skipping. In Advances in Information Retrieval. 771–778.
[55] Stanislav Morozov and Artem Babenko. 2018.
Non-metric Similarity Graphs for Maximum Inner Product Search. In Advances in Neural Information Processing Systems.
[56] Behnam Neyshabur and Nathan Srebro. 2015. On Symmetric and Asymmetric LSHs for Inner Product Search. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 (Lille, France). 1926–1934.
[57] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. (November 2016).
[58] Yuxin Peng, Xin Huang, and Yunzhen Zhao. 2018. An Overview of Cross-Media Retrieval: Concepts, Methodologies, Benchmarks, and Challenges. IEEE Transactions on Circuits and Systems for Video Technology 28, 9 (Sep 2018), 2372–
2385.
[59] Matthias Petri, Alistair Moffat, Joel Mackenzie, J. Shane Culpepper, and Daniel Beck. 2019. Accelerated Query Processing Via Similarity Score Prediction. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (Paris, France). 485–494.
[60] Giulio Ermanno Pibiri and Rossano Venturini. 2020. Techniques for Inverted Index Compression. ACM Comput.
Surv. 53, 6, Article 125 (Dec 2020), 36 pages.
[61] Rameshwar Pratap, Debajyoti Bera, and Karthik Revanuru. 2019. Efficient Sketching Algorithm for Sparse Binary Data. In 2019 IEEE International Conference on Data Mining (ICDM). 508–517.
[62] Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994.
Okapi at TREC-3. In TREC (NIST Special Publication, Vol. 500-225), Donna K. Harman (Ed.). National Institute of Standards and Technology (NIST), 109–126.
[63] Anshumali Shrivastava and Ping Li. 2014. Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS). In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2 (Montreal, Canada). MIT Press, Cambridge, MA, USA, 2321–2329.
[64] Y. Song, Y. Gu, R. Zhang, and G. Yu. 2021.
ProMIPS: Efficient High-Dimensional c-Approximate Maximum Inner Product Search with a Lightweight Index. In 2021 IEEE 37th International Conference on Data Engineering (ICDE). Los Alamitos, CA, USA, 1619–1630.
[65] Shulong Tan, Zhaozhuo Xu, Weijie Zhao, Hongliang Fei, Zhixin Zhou, and Ping Li. 2021. Norm Adjusted Proximity Graph for Fast Inner Product Retrieval. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (Virtual Event, Singapore). 1552–
1560.
[66] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
[67] Mo Tiwari, Ryan Kang, Je-Yong Lee, Donghyun Lee, Chris Piech, Sebastian Thrun, Ilan Shomorony, and Martin Jinye Zhang. 2023. Faster Maximum Inner Product Search in High Dimensions. arXiv:2212.07551 [cs.LG]
[68] Nicola Tonellotto, Craig Macdonald, and Iadh Ounis. 2018. Efficient Query Processing for Scalable Web Search. Foundations and Trends in Information Retrieval 12, 4–5 (Dec 2018), 319–500.
[69] Howard Turtle and James Flood. 1995. Query Evaluation: Strategies and Optimizations. Information Processing and Management 31, 6 (November 1995), 831–850.
[70] Bhisham Dev Verma, Rameshwar Pratap, and Debajyoti Bera. 2022. Efficient Binary Embedding of Categorical Data using BinSketch. Data Mining and Knowledge Discovery 36 (2022), 537–565.
[71] Mengzhao Wang, Xiaoliang Xu, Qiang Yue, and Yuxiang Wang. 2021. A Comprehensive Survey and Experimental Comparison of Graph-Based Approximate Nearest Neighbor Search. Proc. VLDB Endow. 14, 11 (Jul 2021), 1964–1978.
[72] Shuai Wang, Shengyao Zhuang, and Guido Zuccon. 2021. BERT-Based Dense Retrievers Require Interpolation with BM25 for Effective Passage Retrieval. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval (Virtual Event, Canada). 317–324.
[73] David P. Woodruff. 2014. Sketching as a Tool for Numerical Linear Algebra. Foundations and Trends in Theoretical Computer Science 10, 1–2 (Oct 2014), 1–157.
[74] Xiang Wu, Ruiqi Guo, Sanjiv Kumar, and David Simcha. 2019. Local Orthogonal Decomposition for Maximum Inner Product Search. arXiv:1903.10391 [cs.LG]
[75] Xiang Wu, Ruiqi Guo, David Simcha, Dave Dopson, and Sanjiv Kumar. 2019. Efficient Inner Product Approximation in Hybrid Spaces. arXiv:1903.08690 [cs.LG]
[76] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation.
[77] Xiao Yan, Jinfeng Li, Xinyan Dai, Hongzhi Chen, and James Cheng. 2018. Norm-Ranging LSH for Maximum Inner Product Search. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (Montréal, Canada). 2956–2965.
[78] Jheng-Hong Yang, Xueguang Ma, and Jimmy Lin. 2021. Sparsifying Sparse Representations for Passage Retrieval by Top-k Masking. arXiv:2112.09628 [cs.IR]
[79] Hamed Zamani, Mostafa Dehghani, W. Bruce Croft, Erik Learned-Miller, and Jaap Kamps. 2018. From Neural Re-Ranking to Neural Ranking: Learning a Sparse Representation for Inverted Indexing. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (Torino, Italy). 497–506.
[80] Wengang Zhou, Houqiang Li, and Qi Tian. 2017. Recent Advance in Content-based Image Retrieval: A Literature Survey. arXiv:1706.06064 [cs.MM]
[81] Zhixin Zhou, Shulong Tan, Zhaozhuo Xu, and Ping Li. 2019. Möbius Transformation for Fast Inner Product Search on Graph.
[82] Shengyao Zhuang and Guido Zuccon. 2022. Fast Passage Re-ranking with Contextualized Exact Term Matching and Efficient Passage Expansion. In Workshop on Reaching Efficiency in Neural Information Retrieval, the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.
[83] Justin Zobel and Alistair Moffat. 2006. Inverted Files for Text Search Engines. Comput. Surveys 38, 2 (Jul 2006), 6–es.

A PROOF OF THEOREM 4.2

Fix two vectors $u$ and $v \in \mathbb{R}^N$. Define $Z_{\text{Sketch}} = \langle \phi(u), \phi(v) \rangle$ as the random variable representing the inner product of sketches of size $n$, prepared using the projection $\phi(u) = Ru$, with $R \in \{-1/\sqrt{n}, 1/\sqrt{n}\}^{n \times N}$.
$Z_{\text{Sketch}}$ is an unbiased estimator of $\langle u, v \rangle$. Its distribution tends to a Gaussian with variance:

$$\frac{1}{n}\Big( \|u\|_2^2 \|v\|_2^2 + \langle u, v \rangle^2 - 2 \sum_i u_i^2 v_i^2 \Big).$$

Proof. Consider the random variable $Z = (\sum_j R_j u_j)(\sum_k R_k v_k)$, where the $R_j$'s are Rademacher random variables. It is clear that $Z/n$ is the product of the sketch coordinates at position $i$ (for any $i$): $\phi(u)_i \phi(v)_i$.
We can expand the expected value of $Z$ as follows:

$$\mathbb{E}[Z] = \mathbb{E}\Big[\Big(\sum_j R_j u_j\Big)\Big(\sum_k R_k v_k\Big)\Big] = \mathbb{E}\Big[\sum_i u_i v_i\Big] + \mathbb{E}\Big[\sum_{j \neq k} R_j R_k u_j v_k\Big] = \sum_i u_i v_i \underbrace{\mathbb{E}[R_i^2]}_{1} + \sum_{j \neq k} u_j v_k \underbrace{\mathbb{E}[R_j R_k]}_{0} = \langle u, v \rangle.$$

The variance of $Z$ can be expressed as follows:

$$\mathrm{Var}(Z) = \mathbb{E}[Z^2] - \mathbb{E}[Z]^2 = \mathbb{E}\Big[\Big(\sum_j R_j u_j\Big)^2 \Big(\sum_k R_k v_k\Big)^2\Big] - \langle u, v \rangle^2.$$
We have the following:

$$\mathbb{E}\Big[\Big(\sum_j R_j u_j\Big)^2 \Big(\sum_k R_k v_k\Big)^2\Big] = \mathbb{E}\Big[\Big(\sum_i u_i^2 + \sum_{i \neq j} R_i R_j u_i u_j\Big)\Big(\sum_k v_k^2 + \sum_{k \neq l} R_k R_l v_k v_l\Big)\Big] \quad (12)$$

$$= \|u\|_2^2 \|v\|_2^2 + \underbrace{\mathbb{E}\Big[\|u\|_2^2 \sum_{k \neq l} R_k R_l v_k v_l\Big]}_{0} + \underbrace{\mathbb{E}\Big[\|v\|_2^2 \sum_{i \neq j} R_i R_j u_i u_j\Big]}_{0} + \mathbb{E}\Big[\sum_{i \neq j} R_i R_j u_i u_j \sum_{k \neq l} R_k R_l v_k v_l\Big]. \quad (13)$$

The last term can be decomposed as follows:

$$\mathbb{E}\Big[\sum_{i \neq j \neq k \neq l} R_i R_j R_k R_l u_i u_j v_k v_l\Big] + \mathbb{E}\Big[\sum_{i = k,\, j \neq l \,\vee\, i \neq k,\, j = l} R_i R_j R_k R_l u_i u_j v_k v_l\Big] + \mathbb{E}\Big[\sum_{i \neq j,\, i = k,\, j = l \,\vee\, i \neq j,\, i = l,\, j = k} R_i R_j R_k R_l u_i u_j v_k v_l\Big].$$

The first two terms are 0 and the last term can be rewritten as follows:

$$2\, \mathbb{E}\Big[\sum_i u_i v_i \Big(\sum_j u_j v_j - u_i v_i\Big)\Big] = 2 \langle u, v \rangle^2 - 2 \sum_i u_i^2 v_i^2. \quad (14)$$
We now substitute the last term in Equation (13) with Equation (14) to obtain:

$$\mathrm{Var}(Z) = \|u\|_2^2 \|v\|_2^2 + \langle u, v \rangle^2 - 2 \sum_i u_i^2 v_i^2. \quad (15)$$

Observe that $Z_{\text{Sketch}} = \sum_i \phi(u)_i \phi(v)_i$ is the sum of independent, identically distributed random variables. Furthermore, for bounded vectors $u$ and $v$, the variance is finite. By the application of the Central Limit Theorem, we can deduce that the distribution of $Z_{\text{Sketch}}$ tends to a normal distribution with the stated expected value. Noting that $\mathrm{Var}(Z_{\text{Sketch}}) = \frac{1}{n^2} \sum_i \mathrm{Var}(Z)$ gives the desired variance. □
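As a quick numerical sanity check of Theorem 4.2 (not part of the original proof; the dimensions, seed, and trial count below are arbitrary choices), the following NumPy snippet estimates the mean and variance of $Z_{\text{Sketch}}$ and compares them with the stated formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, trials = 64, 16, 10_000
u, v = rng.normal(size=N), rng.normal(size=N)

# R has i.i.d. entries in {-1/sqrt(n), +1/sqrt(n)}; phi(x) = R x.
R = rng.choice([-1.0, 1.0], size=(trials, n, N)) / np.sqrt(n)
phi_u = np.einsum("tij,j->ti", R, u)
phi_v = np.einsum("tij,j->ti", R, v)
z_sketch = np.sum(phi_u * phi_v, axis=1)

var_predicted = ((u @ u) * (v @ v) + (u @ v) ** 2 - 2 * np.sum(u**2 * v**2)) / n
print(z_sketch.mean(), u @ v)          # unbiasedness: both close to <u, v>
print(z_sketch.var(), var_predicted)   # variance stated by Theorem 4.2
```

Both printed pairs should agree to within Monte Carlo error.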
B PROOF OF THEOREM 4.3

Fix a query vector $q \in \mathbb{R}^N$ and let $X$ be a random vector drawn according to the following probabilistic model. Coordinate $i$, $X_i$, is non-zero with probability $p_i > 0$ and, if it is non-zero, draws its value from a distribution with mean $\mu$ and variance $\sigma^2$. $Z_{\text{Sketch}} = \langle \phi(q), \phi(X) \rangle$, with $\phi(u) = Ru$ and $R \in \{-1/\sqrt{n}, 1/\sqrt{n}\}^{n \times N}$, has expected value $\mu \sum_i p_i q_i$ and variance:

$$\frac{1}{n}\bigg[ (\mu^2 + \sigma^2)\Big( \|q\|_2^2 \sum_i p_i - \sum_i p_i q_i^2 \Big) + \mu^2 \Big( \big(\sum_i q_i p_i\big)^2 - \sum_i (q_i p_i)^2 \Big) \bigg].$$

Proof. It is easy to see that:

$$\mathbb{E}[Z_{\text{Sketch}}] = \sum_i q_i\, \mathbb{E}[X_i] = \mu \sum_i p_i q_i.$$
As for variance, we start from Theorem 4.2 and arrive at the following expression:

$$\frac{1}{n}\Big( \|q\|_2^2\, \mathbb{E}[\|X\|_2^2] + \mathbb{E}[\langle q, X \rangle^2] - 2 \sum_i q_i^2\, \mathbb{E}[X_i^2] \Big), \quad (16)$$

where the expectation is with respect to $X$. Let us consider the terms inside the parentheses one by one. The first term becomes:

$$\|q\|_2^2\, \mathbb{E}[\|X\|_2^2] = \|q\|_2^2 \sum_i \mathbb{E}[X_i^2] = \|q\|_2^2 (\mu^2 + \sigma^2) \sum_i p_i.$$
The second term reduces to:

$$\mathbb{E}[\langle q, X \rangle^2] = \mathbb{E}[\langle q, X \rangle]^2 + \mathrm{Var}[\langle q, X \rangle] = \mu^2 \Big(\sum_i q_i p_i\Big)^2 + \sum_i q_i^2 \big[(\mu^2 + \sigma^2) p_i - \mu^2 p_i^2\big] = \mu^2 \Big( \big(\sum_i q_i p_i\big)^2 - \sum_i (q_i p_i)^2 \Big) + \sum_i q_i^2 p_i (\mu^2 + \sigma^2).$$

Finally, the last term breaks down to:

$$-2 \sum_i q_i^2\, \mathbb{E}[X_i^2] = -2 \sum_i q_i^2 (\mu^2 + \sigma^2) p_i = -2 (\mu^2 + \sigma^2) \sum_i q_i^2 p_i.$$
Putting all these terms back into Equation (16) yields the desired expression for variance. □
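The expectation in Theorem 4.3 is likewise easy to check empirically. The snippet below is an illustrative simulation with arbitrary constants; since the sketch is unbiased, it suffices to compare the empirical mean of $\langle q, X \rangle$ against $\mu \sum_i p_i q_i$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, mu, sigma, trials = 32, 0.5, 0.2, 100_000
q = rng.normal(size=N)
p = rng.uniform(0.05, 0.3, size=N)

# Coordinate i is non-zero with probability p_i; non-zero values ~ N(mu, sigma^2).
X = (rng.random((trials, N)) < p) * rng.normal(mu, sigma, size=(trials, N))
print(np.mean(X @ q), mu * np.sum(p * q))   # both approximate E[Z_Sketch]
```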
C PROOF OF THEOREM 4.5

Let $X$ be a random vector drawn according to the following probabilistic model. Coordinate $i$, $X_i$, is non-zero with probability $p_i > 0$ and, if it is non-zero, draws its value from a distribution with PDF $\phi$ and CDF $\Phi$. Then:

$$\mathbb{P}[X_{\pi(i)} - X_i \le \delta] \approx (1 - p_i)\, e^{-\theta (1 - \Phi(\delta)) \sum_{j \neq i} p_j} + p_i \int e^{-\theta (1 - \Phi(\alpha + \delta)) \sum_{j \neq i} p_j}\, \phi(\alpha)\, d\alpha.$$

Proof. Decomposing the probability of the event by conditioning on whether $X_i$ is "active" (i.e., its value is drawn from the distribution with PDF $\phi$) or "inactive" (i.e., it is 0), we arrive at:

$$\mathbb{P}[X_{\pi(i)} - X_i \le \delta] = p_i\, \mathbb{P}[X_{\pi(i)} - X_i \le \delta \mid X_i \text{ is active}] + (1 - p_i)\, \mathbb{P}[X_{\pi(i)} \le \delta \mid X_i \text{ is inactive}].$$
The term conditioned on $X_i$ being active is given by Theorem 5.4 of [16]. The other event involving an inactive $X_i$ happens when all values that collide with $X_{\pi(i)}$ are less than or equal to $\delta$. This event is equivalent to the event that every active coordinate whose value is greater than $\delta$ maps to any sketch coordinate except $\pi(i)$.
Using this alternative event, we can write the conditional probability as follows:

$$\Big(\big(1 - \tfrac{1}{m}\big)^{h}\Big)^{(1 - \Phi(\delta)) \sum_{j \neq i} p_j} \approx e^{-\theta (1 - \Phi(\delta)) \sum_{j \neq i} p_j},$$

where we used $\theta \approx 1 - (1 - 1/m)^h$. That completes the proof. □
2309.09013
[ "2104.05740" ]
2309.07915#0
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
3 2 0 2 t c O 2 ] L C . s c [ 2 v 5 1 9 7 0 . 9 0 3 2 : v i X r a Preprint # MMICL: EMPOWERING VISION-LANGUAGE MODEL WITH MULTI-MODAL IN-CONTEXT LEARNING Haozhe ZhaoË 1, Zefan CaiË 1, Shuzheng SiË 1, Xiaojian Ma2, Kaikai An1, Liang Chen1, Zixuan Liu3, Sheng Wang3, Wenjuan Han:4, Baobao Chang:1 1National Key Laboratory for Multimedia Information Processing, Peking University 2National Key Laboratory of General Artificial Intelligence, BIGAI 3Paul G. Allen School of Computer Science and Engineering, University of Washington 4Beijing Jiaotong University [email protected], [email protected] https://github.com/PKUnlp-icler/MIC # ABSTRACT
2309.07915#1
2309.07915
[ "2305.15023" ]
2309.07915#1
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. How- ever, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in down- stream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLMâ s ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state- of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi- modal prompt understanding and emerges the impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context. Our code, dataset and model are available at https://github.com/PKUnlp- icler/MIC. # INTRODUCTION General-purpose vision-language pre-trained models (VLMs) have made significant advancements (Li et al., 2022; 2023d;g; Zhu et al., 2023; Li et al., 2023b). Recent VLMs mostly augment a large language model (LLM) with a visual encoder and exhibit impressive zero-shot capacities in various visual tasks. However, unlike LLMs that can extract rich background knowledge and task information from the prompt with in-context learning (ICL), most VLMs still struggle to understand complex multi-modal prompts that include multiple images. Previous studies (Li et al., 2023d;b) primarily focus on handling the user queries with a single image rather than multi-modal prompts with interleaved multiple images and text.
2309.07915#0
2309.07915#2
2309.07915
[ "2305.15023" ]
2309.07915#2
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Although some VLMs like Flamingo (Alayrac et al., 2022) and Kosmos-1 (Huang et al., 2023b) can handle user queries with multiple images, their pre-training data can not provide more sophisticated multi-modal prompts than interleaved image and text crawled from the web (Awadalla et al., 2023). Hence, there is a gap between the prompts used in pre-training these VLMs and the user queries in real-world scenarios, which always contain multiple images and more sophisticated text. Specifically, these VLMs may suffer from the following three limitations, which makes VLMs less effective in downstream vision-language tasks. Hard to Understand Text-to-Image Reference: Previous studies rarely attempt to address the issue of text-to-image reference in the multi-modal prompts. However, there are often intricate referential relationships between the text and images in user queries, with different words mentioning different
2309.07915#1
2309.07915#3
2309.07915
[ "2305.15023" ]
2309.07915#3
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
1 # Preprint Please describe the information of Carefully analyze the given images eel ardengnantogienitem & Tell me, is there a horse ay 2 4 . especially the relationship What differences does between and & . Niel and ees Itis not possible to tell from the @ image a horse is in the image. have? â Aman is seen walking down a street Here are some examples. Tiel aca o ea WEIGEL Sporty car on one side of road vs. off- sitting on the man's head, and it seems 2 i road jeep with mountainous terrain in There is a horse in . locighatoual: the background on the other hand. re â (a) â _ while there is no horse in ga - The baby is crying as he broke the Please take a closer look at the two se ser loo cup. Therefore, please refer to = images and explain the connection EE Gehman sonilani and tell me if there is a horse in the â They has similarity in size and shape 7 of vehicle, One is on the road and the (|. other has mountainous terrain with ir) TEN hes; CONE APD snow capped mountains in the Yes, there is a horse in the image and cup. @ foreground. it's standing on a grassy field (b) © (dd) What happens to the He fall to the ground in front of a green grassy man after hitting the area with trees and shrubbery surrounding the @ ball? area where he hit the ball from his golf. © BO Treseimages depict the growth â The image 0 is just germinating, the image 1 is \ phases of the tree, please describe the only a bare trunk, the image 2 is luxuriant, and @ contents of each image carefully. the image 3 is a growing plant. (f) Figure 1: Examples of vision-language dialogue generated by MMICLtypically contain prompts with interleaved images and text. MMICL understands spatial (a), logical (b), and temporal (e) relationships among images. MMICL can also grasp text-to-image references as (c),(d) and (f). images.
2309.07915#2
2309.07915#4
2309.07915
[ "2305.15023" ]
2309.07915#4
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
For example, the user may ask a specific question about multiple images(Fig. 1.c and Fig. 1.f) or use multiple images as exemplars to ask the question only about a specific image(Fig. 1.d). However, the training data used in previous studies (Li et al., 2023d; Alayrac et al., 2022; Huang et al., 2023a) are crawled from the web and may lack explicit text-to-image references. VLMs thus might fail to handle user queries involving intricate text-to-image references. Hard to Understand the Relationships between Multiple Images: There are often spatial, temporal, and logical relationships between multiple images, and correctly understanding them allows the model to handle user queries better. However, the pre-training data used by previous VLMs (Alayrac et al., 2022) are collected from the internet, lacking close connections among images, especially when these images are far apart on the same webpage. It hampers the ability of VLMs to understand the intricate relationships among the images and further limits their reasoning ability. Hard to Learn from In-Context Multi-Modal Demonstrations: Previous studies have shown that pretrained LLMs can benefit from few in-context demonstrations (Brown et al., 2020; Dong et al., 2023). However, the ICL ability of current VLMs is rather limited, specifically: 1) VLMs like BLIP-2 (Li et al., 2023d), LLaVA (Li et al., 2023b) only support multi-modal prompts with a single image, hampering their abilities to use multiple multi-modal demonstrations to enhance their performance during the inference; 2)Although VLMs such as Flamingo (Alayrac et al., 2022) support multi-image inputs during pretraining and emerge ICL abilities, their context schemes fall to provide text-image references and closely related images. It inhibits them from offering sophisticated enough prompts to the VLMs, thereby limiting the effectiveness of their ICL ability. Besides, the lack of further supervised instruction tuning hinders their effectiveness across downstream tasks. In this paper, to address the aforementioned limitations 1) We present MMICL, a new approach to allow VLMs to efficiently deal with multi-modal inputs, including relationships among multiple images and text-to-image references. 2) We propose a novel context scheme in which incorporating
2309.07915#3
2309.07915#5
2309.07915
[ "2305.15023" ]
2309.07915#5
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
2 Preprint â â embed = Text Text. Text embed embed Text Text Text = teat | Text Bel embed t t t VPG PG PG â VPG t t t t Img Img Img Img (a) VLMs Focused on a single image (b) VLMs with few-shot ability (c) MMICL Figure 2: Comparison of different VLM architectures: VLMs focused on a single image, VLMs with few-shot ability, and MMICL with equal treatment of image and text representation. an extra image declaration section, along with the inclusion of image proxy tokens, enhances the ICL ability of the VLM. 3) We construct a multi-modal in-context learning dataset in accordance with the proposed scheme. The dataset is adapted from a range of existing datasets and can be used to provide support for the training of more capable VLMs. Our experiments show that MMICL achieves new state-of-the-art performance on various of vision- language benchmarks including MME (Fu et al., 2023) and MMBench (Liu et al., 2023c) *. Com- prehensive examinations of the three limitations we aim to address reveal that MMICL exhibits exceptional ability in understanding text-to-image references (13-points improvement on the vision- language compositionality benchmark, Winoground (Thrush et al., 2022a)) and intricate relationships among images(12-points improvement on the multi-image reasoning benchmark, RAVEN (Huang et al., 2023a)). Moreover, MMICL demonstrates impressive multi-model ICL performance across var- ious tasks. We also observe that MMICL efficiently mitigates the language bias, which often causes VLMs to ignore visual contents when facing extensive textual contexts, leading to hallucinations. # 2 MMICL 2.1 MODEL ARCHITECTURE Most VLMs utilize Visual-Prompt Generators (VPG) (e.g., Resampler (Alayrac et al., 2022), Q- former (Li et al., 2023d)) to extract visual embeddings from the image features encoded by vision backbones and use visual embeddings to help LLMs understand visual inputs.
2309.07915#4
2309.07915#6
2309.07915
[ "2305.15023" ]
2309.07915#6
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
The model architecture shown in Fig. 2.a belongs to VLMs that focus on prompts with a single image, such as Blip-2 (Li et al., 2023d), which always places the image at the top of the entire input and can not handle the inputs with multiple images In Fig. 2.b, VLMs with few-shot ability, such as Flamingo (Alayrac et al., 2022), encode images into image embeddings with a fixed number of visual tokens and use cross-attentions in LLM to mixture the visual and text content. Different from previous work, MMICL shown in Fig. 2.c treats image and text representations equally and establishes the reference between image and text via image declaration. It enables users to have the flexibility to input multiple images and text in any desired order, with no restrictions on the quantity or placement of images in contexts. As shown in Fig. 4, each given image is encoded by a vision encoder (e.g., ViT (Radford et al., 2021)) to get the image representation. Then, we use the Q-former as the VPG to extract the visual embedding. We utilize a fully connected layer as the projection layer to convert each visual embedding to the same dimension as the text embedding of the LLM. Finally, we combine the visual embeddings of multiple images with text embeddings in an interleaved style and feed them into the LLM. We set the weights for mapping query and value vectors in the attention layer of LLM as learnable to better adapt to multi-modal prompts with multiple images.
2309.07915#5
2309.07915#7
2309.07915
[ "2305.15023" ]
2309.07915#7
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
More details are presented in Appendix D. 2.2 THE DESIGN OF CONTEXT SCHEME OF MMICL In this section, we outline the design of the Context Scheme for MMICL. The proposed scheme is devised to proficiently transform the interleaved image-text data into the training context for MMICL. *Results of MMICL are submitted on August 28th, 2023. 3 Preprint Original VL Task (a) Image Declaration (b) Multi-modal Data with Interconnected Images Visual Question Answering Carefully analyze image j: [TMG)] Carefully analyze images to answer the question. ¢ ¢ fe a gt Se the questi In image 0: [IMGo] 4° - image 1: [[MG,] \* fA ki (i> Sey 0 answer the question. ge 0: [IMGo] \ * - â Hi, is image 1: [1MG,] \ Are the men and women Q: Are the men and women are quarrelling with image 2: [[MG2] *opijis ? are quarreling? quarreling? ki Answer: Yes A: Yes ety (c) Unified Multi-modal-in-context Format â Q: The image 0 is [/MGo] . Carefully analyze the Image Captioning The image j is [JMG] image 0 to generate a concise and accurate description that accurately represents the objects, people, or scenery present. A: An airplane flying in the sky, isi @ Carefully analyze image j to ©. generate a concise and accurate An airplane flying description that accurately Q: The image j is [[MG;]/ _â ~__. Carefully analyze the in the sky, represents the objects, people, and & ¢ scenery present image j to generate a concise and accurate description that accurately represents the objects, people, or scenery present. A ea 2) Machine Annotation Manual Annotation [/MG] Image Proxy Figure 3: Context scheme for MMICL, which seamlessly transforms the interleaved image-text data into training context in a unified format 2.2.1 IMAGE DECLARATION Users may use textual descriptions to refer to particular images in their queries. Such reference can provide information about the visual content mentioned in the text to the VLM, allowing it to learn alignment between two modalities.
2309.07915#6
2309.07915#8
2309.07915
[ "2305.15023" ]
2309.07915#8
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
To precisely link text and image, we form image declaration templates for each image in mixed inputs, as shown in Fig. 3.a. Firstly, we allocate a unique image proxy ([IMGj]) to reference the visual embedding of image j, which provides a unique identifier for VLMs to index and distinguish between visual and text embeddings. Then, we utilize natural language prompts to establish references between text and image. Incorporating the explicit text-to-image reference in the image declaration assists the model in correlating the text with the appropriate image. Meanwhile, the image declaration, maintained as textual content, can also preserve the flexibility to appear at any position within the prompt. Each instance Ii follows the structure, where the Xi symbolizes the set of image decorations that can be placed anywhere within the instance Ii. qi and ai denote the question with instruction and corresponding answer, respectively. Ii â pXi, qi, aiq (1)
2309.07915#7
2309.07915#9
2309.07915
[ "2305.15023" ]
2309.07915#9
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
2.2.2 MULTI-MODEL DATA WITH INTERCONNECTED IMAGES To incorporate abundant multi-image information within the context schema of MMICL, we generate interconnected multi-image data that includes spatial, logical, and temporal relationships. It aids MMICLin understanding the intricate relationships among images in user queries. Specifically, we derive frames from videos to build multi-image data. The frames extracted from video inherently sustain close temporal and spatial relations, which infuse spatial and temporal correlation information among images into the context scheme. Besides, we build multi-image data from images depicting multiple object interactions. We detect the objects within the image and generate bounding boxes for each object. We acquire multiple sub-images of different objects by cropping the image according to bounding boxes. We then replace the textual references to these objects with their corresponding cropped images, thus forming interleaved multi-modal data with logical and causal interconnected images, as delineated in Fig. 3.b. Each instance Ii comprises a question-answer text pair along with K images, where the xi,k P Xi represents the image declaration for the k-th image.
2309.07915#8
2309.07915#10
2309.07915
[ "2305.15023" ]
2309.07915#10
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Ii â ptx1, x2, . . . , xku, qi, aiq (2) 2.2.3 UNIFIED MULTI-MODAL IN-CONTEXT FORMAT FOR DIFFERENT TASKS We propose a design for producing multi-modal in-context learning data for different tasks to enrich the context scheme of MMICL. It aims to improve the instruction-aware ability of VLM and expand 4 Preprint ¢ 1 ¢ ith In A te 4 vis f © quarrelling with â ii ? Proj Pretraned LLMs rial Yes, image 1 is quarrelling with image 2. j J J Stagel Stagell Pretraining Multi-Mode! In-Context Tuning Si on Encoder | Visi ) UnfreezeQ& Vsâ Text embedding MB Untreeze oma Projection Projection) MB Freeze w Vision Prompt % | Visual embedding meq Image Proxy Figure 4: Illustration of MMICL architecture and training paradigm. The upper part denotes the overview of model architecture and the bottom denotes the pipeline of the two-stage training paradigm. its abilities for proficient multi-modal in-context learning. Specifically, we start by crafting diverse instructions for each task and generate different templates for the task utilizing these instructions. We then fill in the randomly selected template with the original task to assemble data equipped with instructions as Appendix F. Moreover, we convert the data into a multi-modal in-context format by constructing few-shot exemplars generated by sampling instances from the data. These exemplars are combined with the input instance to produce the multi-modal in-context data. In this way, we can transform all tasks into a unified multi-modal in-context format, as illustrated in Fig. 3.c. This method facilitates amassing an extensive amount of high-quality data from different tasks, enriching the context schema of MMICL with an abundant diversity of multi-modal in-context data teeming with diverse instructions. Ultimately, this improves the modelâ s ability to follow instructions and multi-modal in-context learning ability. Each instance Ii comprises N exemplars. Ii â ptP1, ¨ ¨ ¨ , PN u, Xi, qi, aiq Each exemplar Pj â pXj, qj, ajq, Xj denotes the image declaration of the j-th exemplar. qj and aj denote the question and answer for the j-th exemplar, respectively.
2309.07915#9
2309.07915#11
2309.07915
[ "2305.15023" ]
2309.07915#11
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
2.3 MULTIMODALITY IN-CONTEXT LEARNING (MIC) DATASET CONSTRUCTION To help VLMs understand the complex prompts, we construct MIC dataset by gathering data from public data resources and converting them based on the context scheme. It has three key aspects: 1) image declaration, 2) multi-modal data with closely related images, and 3) multi- modal in-context data for different tasks. Training set of MIC comes from 16 datasets across 8 categories, while the test set comes from 18 datasets across 10 categories.
2309.07915#10
2309.07915#12
2309.07915
[ "2305.15023" ]
2309.07915#12
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
Ad- ditional details can be found in Ap- pendix B and Appendix C. Algorithm 1 Image Declaration Require: Interleaved multi-modal input: X, containing visual embedding: V â tv1, v2, . . .u and text embedding H â th1, h2, . . .u, where vi represents the image embedding and hi represents the span between the image embeddings. Ensure: Interleaved multi-modal input with image declaration: Ë X 1: for each interleaved multi-modal input X do 2: 3: 4: 5: 6: 7: 8: 9: end for n à number of images in X Initialize image proxy tokens rIM G1s, rIM G2s, . . . for each image i in X do end for R à tRef1, Ref2, . . .u Replace vi in X with Refi: Ë X â rRef1, h1, Ref2, h2, . . .s Firstly, we create an image declara- tion per instance in all datasets using Algorithm 1 to generate datasets with explicit text-to-image 5 # Preprint Cognition Perception Model Comm. Num. Text. Code. Existen. Count Pos. Color OCR Poster Cele. Scene Land. Art.
2309.07915#11
2309.07915#13
2309.07915
[ "2305.15023" ]
2309.07915#13
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
LLaVA MiniGPT-4 MultiModal-GPT VisualGLM-6B VPGTrans LaVIN LLaMA-Adapter-V2 mPLUG-Owl InstructBLIP BLIP-2 Lynx GIT2 Otter Cheetor LRV-Instruction BLIVA 50.00 57.50 57.14 45.00 0.00 59.29 62.50 60.00 49.29 45.00 50.00 39.29 50.00 77.50 64.29 65.00 47.50 87.14 62.50 50.00 81.43 78.57 60.00 80.00 129.29 40.00 65.00 110.00 40.00 65.00 110.71 17.50 42.50 50.00 67.50 99.29 106.43 72.50 57.50 98.57 77.50 57.50 100.71 70.00 85.00 136.43 57.50 77.50 50.00 40.00 55.00 47.50 57.50 50.00 55.00 57.50 57.50 75.00 45.00 45.00 70.00 87.50 72.50 60.00 49.00 50.00 50.00 48.82 50.00 60.50 54.00 71.75 54.41 68.33 59.50 73.82 69.75 68.00 61.67 75.25 53.24 146.25 83.75 85.00 77.25 53.53 141.75 64.75 70.00 47.35 136.75 93.50 87.25 185.00 86.18 148.50 150.25 69.75 120.00 120.00 65.00 136.05 100.29 135.50 159.25 96.25 185.00 143.33 66.67 153.33 72.50 123.81 101.18 153.00 79.75 134.25 160.00 135.00 73.33 148.33 110.00 141.84 105.59 145.25 138.00 136.50 195.00 151.67 90.00 170.00 77.50 124.83 118.24 164.50 162.00 119.50 190.00 118.33 96.67 158.33 65.00 112.59 145.88 158.50 140.50 146.25 88.33 86.67 113.33 72.50 138.78 172.65 158.75 137.25 129.00 195.00 180.00 96.67 80.00 116.67 100.00 147.28 164.12 156.00 145.73 113.50 165.00 111.67 86.67 165.00 110.00 139.04 112.65 147.98 160.53 101.25 180.00 138.33 81.67 180.00 87.50 155.10 140.88 151.50 89.50 133.25 50.00 50.00 55.00 50.00 55.00 43.33 75.00 41.84 55.00 58.33 68.33 57.82 50.00 48.33 55.00 65.99 84.01 85.00 63.33 73.33 88.33 63.33 75.00 107.50 79.59 50.00 48.33 75.00 125.00 99.66 50.00 50.00 55.00 50.00 57.50 82.50 42.50 77.50 MMICL 136.43 82.50 132.50 77.50 170.00 160.00 81.67 156.67 100.00 146.26 141.76 153.75 136.13 135.50 Total Avg. 51.25 51.85 62.97 63.36 74.27 86.66 87.26 88.82 107.47 113.13 113.50 113.85 114.19 115.79 116.29 119.23 129.33
2309.07915#12
2309.07915#14
2309.07915
[ "2305.15023" ]