Columns: id (string), title (string), content (string), prechunk_id (string), postchunk_id (string), arxiv_id (string), references (list of strings)
2309.09150#47
Can Large Language Models Understand Real-World Complex Instructions?
Wang, P.; Li, L.; Chen, L.; Zhu, D.; Lin, B.; Cao, Y.; Liu, Q.; Liu, T.; and Sui, Z. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926. Xu, B.; Xu, Y.; Liang, J.; Xie, C.; Liang, B.; Cui, W.; and Xiao, Y. 2017.
2309.09150#46
2309.09150#48
2309.09150
[ "2204.02311" ]
2309.09150#48
Can Large Language Models Understand Real-World Complex Instructions?
CN-DBpedia: A never-ending Chinese knowledge extraction system. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, 428–438. Springer. Xu, C.; Guo, D.; Duan, N.; and McAuley, J. 2023a. Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data. arXiv preprint arXiv:2304.01196. Xu, C.; Sun, Q.; Zheng, K.; Geng, X.; Zhao, P.; Feng, J.; Tao, C.; and Jiang, D. 2023b.
2309.09150#47
2309.09150#49
2309.09150
[ "2204.02311" ]
2309.09150#49
Can Large Language Models Understand Real-World Complex Instructions?
WizardLM: Empowering Large Language Models to Follow Complex Instructions. arXiv preprint arXiv:2304.12244. Yao, S.; Chen, H.; Hanjie, A. W.; Yang, R.; and Narasimhan, K. 2023a. COLLIE: Systematic Construction of Constrained Text Generation Tasks. arXiv preprint arXiv:2307.08689. Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2023b.
2309.09150#48
2309.09150#50
2309.09150
[ "2204.02311" ]
2309.09150#50
Can Large Language Models Understand Real-World Complex Instructions?
ReAct: Synergizing Reasoning and Acting in Language Models. arXiv preprint arXiv:2210.03629. Yu, J.; Wang, X.; Tu, S.; Cao, S.; Zhang-Li, D.; Lv, X.; Peng, H.; Yao, Z.; Zhang, X.; Li, H.; et al. 2023. KoLA: Carefully Benchmarking World Knowledge of Large Language Models. arXiv preprint arXiv:2306.09296.
2309.09150#49
2309.09150#51
2309.09150
[ "2204.02311" ]
2309.09150#51
Can Large Language Models Understand Real-World Complex Instructions?
Zeng, A.; Liu, X.; Du, Z.; Wang, Z.; Lai, H.; Ding, M.; Yang, Z.; Xu, Y.; Zheng, W.; Xia, X.; Tam, W. L.; Ma, Z.; Xue, Y.; Zhai, J.; Chen, W.; Liu, Z.; Zhang, P.; Dong, Y.; and Tang, J. 2023. GLM-130B: An Open Bilingual Pre-trained Model. In The Eleventh International Conference on Learning Representations (ICLR).
2309.09150#50
2309.09150#52
2309.09150
[ "2204.02311" ]
2309.09150#52
Can Large Language Models Understand Real-World Complex Instructions?
Zha, L.; Zhou, J.; Li, L.; Wang, R.; Huang, Q.; Yang, S.; Yuan, J.; Su, C.; Li, X.; Su, A.; et al. 2023. TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT. arXiv preprint arXiv:2307.08674. Zheng, L.; Chiang, W.-L.; Sheng, Y.; Zhuang, S.; Wu, Z.; Zhuang, Y.; Lin, Z.; Li, Z.; Li, D.; Xing, E. P.; Zhang, H.; Gonzalez, J. E.; and Stoica, I. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685. Zhong, W.; Cui, R.; Guo, Y.; Liang, Y.; Lu, S.; Wang, Y.; Saied, A.; Chen, W.; and Duan, N. 2023.
2309.09150#51
2309.09150#53
2309.09150
[ "2204.02311" ]
2309.09150#53
Can Large Language Models Understand Real-World Complex Instructions?
Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364. Zhou, C.; Liu, P.; Xu, P.; Iyer, S.; Sun, J.; Mao, Y.; Ma, X.; Efrat, A.; Yu, P.; Yu, L.; et al. 2023a. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206.
2309.09150#52
2309.09150#54
2309.09150
[ "2204.02311" ]
2309.09150#54
Can Large Language Models Understand Real-World Complex Instructions?
Zhou, W.; Jiang, Y. E.; Wilcox, E.; Cotterell, R.; and Sachan, M. 2023b. Controlled text generation with natural language instructions. arXiv preprint arXiv:2304.14293. # Data Evolution As introduced in the Data Evolution part, we diversify the collected complex instructions through In-breadth Evolution and complicate the simple instructions via In-depth Evolution. In-breadth Evolution involves (1) Task Description Relocation, (2) Task Description Paraphrasing, and (3) Task Emulation, while In-depth Evolution involves (4) Constraints Addition and (5) Multi-round Interaction. Overall, we design several prompts to enhance the complexity and diversity of the data for various tasks. # In-breadth Evolution We mainly design three prompts to diversify the data in the Planning, QA, and Summarization tasks respectively. Planning We apply the Task Emulation strategy when diversifying the data in the Planning task. The prompts are shown in Tab. 6; the generation mainly consists of two phases. During phase one, GPT-3.5-turbo is required to generate a specific Task Description and corresponding Tools Descriptions based on the theme provided by the user (e.g., maths in the given example). The Tools Descriptions encompass each tool's name, a brief introduction, and the required input parameters. During phase two, GPT-3.5-turbo is required to provide the planning process given the Task Description and corresponding Tools Descriptions generated in phase one. The planning process consists of four main parts: the Task Description, Tools Descriptions, Output Format, and Histories. An example of the Instruction generated from this two-phase prompt is shown in Tab. 7.
2309.09150#53
2309.09150#55
2309.09150
[ "2204.02311" ]
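The two-phase Planning evolution described above can be pictured with a short Python sketch. This is an illustrative outline only: `chat` is a hypothetical wrapper around a GPT-3.5-turbo call, the prompt strings are abbreviated stand-ins for the full templates of Tab. 6, and the `Task`/`Tools` key names follow the example given in that table; the manual entry of tool return values mirrors the quality-control step described in the appendix.

```python
# Sketch of the two-phase Planning data evolution (not the paper's actual code).
import json

def chat(messages):
    """Placeholder for a GPT-3.5-turbo chat call; returns the reply as text."""
    raise NotImplementedError("plug in your own LLM client here")

def phase_one(theme: str) -> dict:
    # Phase 1: ask the model to invent a Task Description and Tools Descriptions
    # for a user-provided theme (e.g., "maths").
    prompt = f"Suppose you're a good planner for designing complex planning tasks in {theme} ..."
    return json.loads(chat([{"role": "user", "content": prompt}]))

def phase_two(task: dict, histories: list) -> dict:
    # Phase 2: ask the model for the next planning step, given the task, tools,
    # required output format, and the execution histories accumulated so far.
    prompt = (
        f"/* Task Description */ {task['Task']}\n"
        f"/* Tools Descriptions */ {json.dumps(task['Tools'])}\n"
        "/* Output Format */ Respond in JSON with 'thoughts' and 'command'.\n"
        f"/* Histories */ {json.dumps(histories)}\n"
        "Determine which next command to use, and respond using the format above:"
    )
    return json.loads(chat([{"role": "user", "content": prompt}]))

def evolve_planning_instruction(theme: str, max_steps: int = 5) -> list:
    task = phase_one(theme)
    histories = []
    for _ in range(max_steps):
        reply = phase_two(task, histories)
        # The tool's return value is entered manually by an annotator, so that
        # the planning process and results recorded in the histories stay correct.
        result = input(f"Return value of {reply['command']['name']}: ")
        histories.append({"Reply": reply, "Result": result})
    return histories
```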
2309.09150#55
Can Large Language Models Understand Real-World Complex Instructions?
It is worth noting that we acknowledge GPT-3.5-turbo is far from a perfect automated agent (Liu et al. 2023b). In order to ensure the quality of the generated data, as depicted in Table 7, we manually enter the correct return values of the tools so that both the planning process and the results in the histories are accurate. Summarization The prompt we use to diversify the data in the Summarization task is shown in Tab. 8. We present various underlying principles for designing task descriptions for the Summarization task in our prompt. These principles mainly employ the Task Description Relocation and Task Description Paraphrasing strategies. We finally generate task descriptions for a total of 100 provided input texts. QA The prompt utilized to diversify the data in the QA task is shown in Tab. 9. In order to enhance the diversity of task descriptions, we require the model to generate a wider range of questions when provided with a given input text. Here, our prompt primarily employs strategies such as Task Description Relocation and Task Description Paraphrasing.
2309.09150#54
2309.09150#56
2309.09150
[ "2204.02311" ]
2309.09150#56
Can Large Language Models Understand Real-World Complex Instructions?
# In-depth Evolution We design two prompts to complicate the simple instructions collected regarding the Well-guided Writing and Brainstorming tasks. Both prompts utilize the Constraints Addition and Multi-round Interaction strategies. Well-guided Writing The prompt to increase the complexity of the basic instruction in the Well-guided Writing task can be seen in Tab. 10. In order to simulate human-like multi-round modifications during the writing process, we define three atomic operations: (1) Count Limit establishes clear requirements for word or sentence count. (2) Specification involves specifying crucial details such as keywords, hashtags, and URLs to ensure precise alignment with specific needs. (3) Revision involves proposing dynamic and objective amendments to enhance the writing style. By employing these operations, the requirements can be made more specific, leading to more effective guidance for the generated results. We ensure that any modifications introduced are objective and can be evaluated automatically. These atomic operations can be reused during the composition process. Brainstorming The prompt that we design for enhancing the complexity of simple instructions in the Brainstorming task is shown in Tab. 11. We define two atomic operations to mimic the human thinking process: (1) Modification includes altering the output format (such as JSON, XML, CSV, Markdown table, Python list, numeric sequence, etc.); additionally, word, sentence, or sample count limits can be imposed, and key information like keywords, hashtags, URLs, and language can also be incorporated into the instruction. (2) Specification involves further inquiring about specific details or asking for more information.
2309.09150#55
2309.09150#57
2309.09150
[ "2204.02311" ]
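The atomic operations defined above can be composed in two ways: their history can be aggregated into a multi-turn dialogue, or folded into a single complex instruction. The following sketch illustrates both aggregation modes; the `Operation` record and the aggregation functions are illustrative names, not the paper's implementation.

```python
# Illustrative aggregation of atomic operations into evolved instructions.
from dataclasses import dataclass

@dataclass
class Operation:
    name: str      # "Count Limit", "Specification", "Revision", or "Modification"
    takeaway: str  # the short instruction summarizing the operation

def to_multi_turn(seed_instruction: str, operations: list[Operation]) -> list[dict]:
    """Each operation becomes one additional user turn of a dialogue."""
    dialogue = [{"role": "user", "content": seed_instruction}]
    for op in operations:
        dialogue.append({"role": "user", "content": op.takeaway})
    return dialogue

def to_single_turn(seed_instruction: str, operations: list[Operation]) -> str:
    """All constraints are folded into one complex single-turn instruction."""
    constraints = "; ".join(op.takeaway for op in operations)
    return f"{seed_instruction} Requirements: {constraints}."

ops = [
    Operation("Count Limit", "Limit the length to three sentences."),
    Operation("Specification", "Retain the keywords: wildflowers, summer."),
]
print(to_single_turn("Create a summary for the given article.", ops))
print(to_multi_turn("Create a summary for the given article.", ops))
```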
2309.09150#57
Can Large Language Models Understand Real-World Complex Instructions?
GPT-3.5-turbo can simulate human thought processes by combining the two atomic operations. The history of multiple calls to these operations can be aggregated into multi-turn dialogues. The final evolved instructions shown in the prompt can serve as complex single-turn instructions, challenging the model to accomplish multiple tasks within a single round of instruction. # Scoring Keywords Annotation We propose four criteria for complex instruction understanding, namely Count Limit, Answer Format, Task-prescribed phrases, and Input-dependent query, as introduced in our evaluation system. Among these criteria, the latter three involve the annotation of scoring keywords. For Answer Format, objective keywords such as "{" and "}"
2309.09150#56
2309.09150#58
2309.09150
[ "2204.02311" ]
2309.09150#58
Can Large Language Models Understand Real-World Complex Instructions?
are directly annotated by humans. For Task-prescribed phrases and Input-dependent query, we employ a collaborative approach with GPT4 and humans. For Task-prescribed phrases, we require GPT4 to extract key phrases related to the task objective directly from the task description, such as keywords and predefined functions. For Input-dependent query, we ask GPT4 to answer the instruction first and then summarize the keywords of its answer that are relevant to the input text. Finally, the annotations by three evaluators are checked and supplemented, and only keywords covered by two or more evaluators are included in the final label set. # Models We present the details of our evaluated models in Table 5. Overall, we evaluate 19 Chinese-oriented models and 15 English-oriented models. The difference between Chinese-oriented models and English-oriented models lies in the proportion of Chinese data in their pretraining corpus.
2309.09150#57
2309.09150#59
2309.09150
[ "2204.02311" ]
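The annotation-aggregation rule described above (a keyword enters the final label set only if at least two of the three evaluators propose it) and a keyword-based scoring check can be sketched as follows. The coverage function is an illustrative assumption about how the annotated keywords could be used; the paper's exact scoring procedure may differ.

```python
# Sketch of evaluator-agreement filtering and keyword-coverage scoring.
from collections import Counter

def aggregate_keywords(annotations: list[set[str]], min_votes: int = 2) -> set[str]:
    """Keep keywords proposed by at least `min_votes` evaluators."""
    votes = Counter(kw for ann in annotations for kw in ann)
    return {kw for kw, count in votes.items() if count >= min_votes}

def keyword_coverage(response: str, keywords: set[str]) -> float:
    """Fraction of scoring keywords that appear in the model response."""
    if not keywords:
        return 1.0
    hit = sum(1 for kw in keywords if kw in response)
    return hit / len(keywords)

annotators = [{"{", "}", "score"}, {"{", "}", "rating"}, {"{", "score"}]
labels = aggregate_keywords(annotators)          # -> {"{", "}", "score"}
print(keyword_coverage('{"score": 5}', labels))  # -> 1.0
```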
2309.09150#59
Can Large Language Models Understand Real-World Complex Instructions?
Among Model Base Model Size Vocabulary Expansion Supported Context Length # IFT samples Chinese-oriented Models (From Scratch) InternLM-chat-7B BatGPT Qwen-7B Baichuan-Base InternLM (Team 2023) BatGPT-sirius (Li et al. 2023c) Qwen1 Baichuan-chat2 7B 15B 7B 13B 16B 6B 6B 6B ChatGLM (Zeng et al. 2023) ChatGLM2 (Zeng et al. 2023) ChatGLM2-32k (Zeng et al. 2023) ChatGLM-6B ChatGLM-6B ChatGLM-6B N/A N/A N/A N/A N/A N/A N/A N/A 8k 32k 8k 4k 2k 2k 8k 32k 500w – – – 110w – – –
2309.09150#58
2309.09150#60
2309.09150
[ "2204.02311" ]
2309.09150#60
Can Large Language Models Understand Real-World Complex Instructions?
Chinese-oriented Models (Continue Pretraining) F F T T F F T T Llama1 BLOOMZ-7B1-mt Llama1 Llama1 Llama2 Llama2 Llama2 Llama2 7B, 13B 7B 7B, 13B, 33B 13B 7B 7B 7B 13B 2k 1k 8k 2k 4k 4k 4k 4k 5w 200w 200w, 300w, 430w 110w 1000w – 120w 100w English-oriented Models Llama2-chat (Touvron et al. 2023) Vicuna-V1.3 (Zheng et al. 2023) Vicuna-V1.5 (Zheng et al. 2023) WizardLM (Xu et al. 2023b) LongChat-V1 (Li* et al. 2023) LongChat-V1.5 (Li* et al. 2023) OpenChat-V3.2 (Wang et al. 2023a) GPT-3.5-turbo GPT-4 Llama2 Llama1 Llama2 Llama1 Llama1 Llama2 Llama2 - - 7B, 13B, 70B 7B, 13B, 33B 7B, 13B 13B 7B, 13B 7B 13B - - N/A N/A N/A N/A N/A N/A N/A N/A N/A 4k 2k 16k 2k 16k 32k 4k 16k 16k 10w 12w 12w 25w 8w, 2w – 0.6w – –
2309.09150#59
2309.09150#61
2309.09150
[ "2204.02311" ]
2309.09150#61
Can Large Language Models Understand Real-World Complex Instructions?
RLHF T T F F F T T T F F F F F F F F T F F F F F F T T Table 5: Models evaluated in this paper. The symbols "-" and "–" denote that details are undisclosed. Vocabulary Expansion indicates whether Chinese-oriented Models (Continue Pretraining) have expanded their vocabulary to include Chinese characters. # IFT samples denotes the number of samples used in the instruction tuning phase. The RLHF column indicates whether the model adopts reinforcement learning with human feedback. them, Chinese-oriented models are further categorized based on whether they are trained from scratch (From scratch, FS) or continue pretraining from English-oriented models (Continue Pretraining, CP). We provide details on their base model, model size, supported context length, the number of samples used in the instruction tuning phase, whether they adopt reinforcement learning with human feedback, and whether the Chinese-oriented model (CP) has expanded the Chinese characters in its vocabulary. 1https://huggingface.co/Qwen/Qwen-7B 2https://huggingface.co/baichuan-inc/Baichuan-13B-Chat 3https://huggingface.co/Abbey4799/kw-cutegpt-13b-ift-lora 4https://huggingface.co/LinkSoul/Chinese-Llama-2-7b 5https://huggingface.co/FlagAlpha/Llama2-Chinese-7b-Chat 6https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf 7https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-
2309.09150#60
2309.09150#62
2309.09150
[ "2204.02311" ]
2309.09150#62
Can Large Language Models Understand Real-World Complex Instructions?
v8.1-fp16 # I: Task & Tools Descriptions Generation /* Task prompt */ Suppose you're a good planner for designing complex planning tasks in maths and provide some implicitly useful tools for solving the problem. Your task is to design tasks that need multi-step operations and thoughts and design tools that can help users to solve the problem. /* Output Format */ You should return the answer in the format as described: { "task": "<a brief task description>", "tools": [ { "name": "<tool name>", "description": "<tool description>", "input": { "<name>": "<value>", ... }}, ... ] } /* Example */ For example: { "Task": "
2309.09150#61
2309.09150#63
2309.09150
[ "2204.02311" ]
2309.09150#63
Can Large Language Models Understand Real-World Complex Instructions?
You are an AI that helps users book flights. Ask the user for their travel plans, then show them flights, and book the flights they select.", "Tools": [ { "name": "findFlights", "description": "searches for available flights", "input": { "Origin": "<airport code>", "Destination": "<airport code>", "DepartureDate": "<date>", "ReturnDate": "<date>", "Passengers": "<count>" } }, ... ] } # II:
2309.09150#62
2309.09150#64
2309.09150
[ "2204.02311" ]
2309.09150#64
Can Large Language Models Understand Real-World Complex Instructions?
Planning Process Generation /* Task Description */ [Task Description from Phase 1]. /* Tools Descriptions */ [Tools Descriptions from Phase 1]. /* Output Format */ You should only respond in JSON format as described below. Response Format: { { "thoughts": { "thought": "<your current thought>", "reasoning": "<self reflect on why you made this decision>", "plan": "short bulleted list that conveys long-term plan" }, "command": { "name": "command name", "input": { "<name>": "<value>" } },
2309.09150#63
2309.09150#65
2309.09150
[ "2204.02311" ]
2309.09150#65
Can Large Language Models Understand Real-World Complex Instructions?
} Ensure the response can be parsed by Python json.loads /* Histories */ And then the system will execute the command and give you the result and log the execution history below. Please mind the history and the given result. System: This reminds you of these events from your past: [History] Human: Stay focused on the history and determine which next command to use, and respond using the format specified above: Table 6: The prompts for diversifying the data in the Planning task during the Data Evolution process. Overall, the data evolution for the Planning task consists of two phases: Tools & Task Description Generation and Planning Process Generation.
2309.09150#64
2309.09150#66
2309.09150
[ "2204.02311" ]
2309.09150#66
Can Large Language Models Understand Real-World Complex Instructions?
The information that requires manual input is highlighted. An example of the Instruction generated from this two-phase prompt is shown in Tab. 7. /* Task Description */ Design a task to find the area of a triangle and provide tools to assist with the calculations. /* Tools Descriptions */ Tools: [ "input": { "area": "<calculated area>" }}, { "name": "calculateArea", "sideC": "<length of side C>" }}, ] /* Output Format */ You should only respond in JSON format as described below. Response Format: { { "
2309.09150#65
2309.09150#67
2309.09150
[ "2204.02311" ]
2309.09150#67
Can Large Language Models Understand Real-World Complex Instructions?
thoughts": { "thought": "<your current thought>", "reasoning": "<self reflect on why you made this decision>", "plan": "short bulleted list that conveys long-term plan" }, "command": { "name": "command name", "input": { "<name>": "<value>" } }, } Ensure the response can be parsed by Python json.loads /* Histories */ System: This reminds you of these events from your past: [ { "Reply": { "thoughts": { ], "command": { "name": "inputTriangleSides", "input": {}} }, }, "Result": "
2309.09150#66
2309.09150#68
2309.09150
[ "2204.02311" ]
2309.09150#68
Can Large Language Models Understand Real-World Complex Instructions?
I have a triangle with side lengths 5, 12, 13" }, { "Reply": { "thoughts": { {"name": "calculateSemiPerimeter", "description": "calculates the semi-perimeter of the triangle", "input": { "sideA": "<length of side A>", "sideB": "<length of side B>", "sideC": "<length of side C>" }}, { "name": "displayArea", "description": "displays the calculated area of the triangle to the user", "description": "
2309.09150#67
2309.09150#69
2309.09150
[ "2204.02311" ]
2309.09150#69
Can Large Language Models Understand Real-World Complex Instructions?
calculates the area of the triangle using Heron's formula", "input": { "semiPerimeter": "<semi-perimeter of the triangle>", "sideA": "<length of side A>", "sideB": "<length of side B>", { "name": "inputTriangleSides", "description": "asks the user for the lengths of the three sides of the triangle", "input": {}} "thought": "
2309.09150#68
2309.09150#70
2309.09150
[ "2204.02311" ]
2309.09150#70
Can Large Language Models Understand Real-World Complex Instructions?
I can start by asking for the lengths of the three sides of the triangle using the 'inputTriangleSides' command.", "reasoning": "By asking for the lengths of the sides, I can obtain the necessary input to calculate the area of the triangle.", "plan": [ "Ask the user for the lengths of the three sides of the triangle using the 'inputTriangleSides' command." "thought": "I can calculate the semi-perimeter of the triangle using the 'calculateSemiPerimeter'
2309.09150#69
2309.09150#71
2309.09150
[ "2204.02311" ]
2309.09150#71
Can Large Language Models Understand Real-World Complex Instructions?
command.", "reasoning": "By calculating the semi-perimeter, I can then use it to calculate the area of the triangle using Heron's formula.", "plan": [ "Calculate the semi-perimeter of the triangle using the 'calculateSemiPerimeter' command." ], "command": { "name": "calculateSemiPerimeter", "input": { "sideA": 5, "sideB": 12, "sideC": 13}} }, }, "Result": "15" } ] Human: Determine which next command to use, and respond using the format specified above: Table 7: The newly generated Instruction for the Planning task during data evolution, derived from the two-phase prompts in Tab. 6. The information that requires manual input is highlighted. You are a task generator, and your role is to create a task description to describe the task of summarizing customer service conversations. You can generate the following task descriptions: 1. Given the conversation records between the customer service agent (A) and the user (Q), please summarize the content of the dialogue and list the main points. 2. Summarize the key information in the conversation records between the customer service agent (A) and the user (Q). 3. For the provided conversation records between the customer service agent (A) and the user (Q), summarize the dialogue content and list the main points. Describe the issues and solutions between the customer service agent and the user, including the user's questions, the agent's answers, and the solutions. At the same time, summarize the key information from the conversation records. 4. Please analyze and summarize the provided conversation records between the customer service agent (A) and the user (Q), describe the issues raised by the user, and the agent's responses and solutions, and identify the key information in the dialogue.
2309.09150#70
2309.09150#72
2309.09150
[ "2204.02311" ]
2309.09150#72
Can Large Language Models Understand Real-World Complex Instructions?
5. Based on the conversation records between the customer service agent (A) and the user (Q), organize the main content of the dialogue and summarize the key information and solutions. Table 8: The prompts for diversifying the data in the Summarization task during the Data Evolution process. You are a question-generation agent that can pose multiple questions in line with a given text description, and these questions should also have a certain level of difficulty. Based on the provided text, pose questions that align with its description. The answers to the questions should be found within the text, and they shouldn't be explicitly stated; instead, they should require inference to deduce.
2309.09150#71
2309.09150#73
2309.09150
[ "2204.02311" ]
2309.09150#73
Can Large Language Models Understand Real-World Complex Instructions?
Table 9: The prompts for diversifying the data in the QA task during the Data Evolution process. /* Task Prompt */ As a skilled writer, your objective is to effectively achieve a simple writing goal by implementing the following strategies: 1. Precisely Define Requirements: Continuously elevate the accuracy and specificity of your requirements to effectively guide the generated results. 2. Objective Revisions: When introducing modifications, ensure that they are objective and amenable to automated evaluation. Avoid subjective and vague instructions, to maintain a consistent and coherent tone. /* Defined Atomic Operations */ Additionally, you have the flexibility to combine various operations to fine-tune the output: 1. "
2309.09150#72
2309.09150#74
2309.09150
[ "2204.02311" ]
2309.09150#74
Can Large Language Models Understand Real-World Complex Instructions?
Count Limit": Establish clear word or sentence count requirements, allowing you to strike the right balance between conciseness and comprehensiveness. 2. "Specification": Specify crucial details like keywords, hashtags, and URLs to align the writing precisely with your specific needs. 3. "Revision": Propose dynamic and objective amendments to enhance the writing style. By following these guidelines, you can harness the full potential of AI-generated content and accomplish your writing objectives with precision and excellence. /* Output Format */ To fulfill this task, you are expected to provide your responses in the following JSON format: { "
2309.09150#73
2309.09150#75
2309.09150
[ "2204.02311" ]
2309.09150#75
Can Large Language Models Understand Real-World Complex Instructions?
Operations": [ { "operation": <"Count limit", "Specification" or "Revision">, "thoughts": <Your thinking process>, "takeways": <Briefly summarize your thought process into a short instruction> } ] } /* Histories */ Input: Create a summary for a given article. [An article] Output: { "Operations": [ { "operation": "Count limit", "thoughts": "I'd like the summary to be neither too concise nor excessively lengthy, so I'd prefer to limit it to three sentences.", "takeways": "Limit the length to three sentences." }, { "operation": "Revision", "thoughts": "The response might be too short and plain.", "takeways": "The response could benefit from a touch of eloquence." }, { "operation": "Specification", "thoughts": "I should define a set of keywords that can better guide the summary.", "takeways": "
2309.09150#74
2309.09150#76
2309.09150
[ "2204.02311" ]
2309.09150#76
Can Large Language Models Understand Real-World Complex Instructions?
Requesting retention of keywords: wildflowers, summer." } ] /* Input */ Input: Craft an Instagram post caption for a photo of my dog and me playing at the beach. } Table 10: The prompt for enhancing the complexity of the simple instruction in the Well-guided Writing task during the Data Evolution process. Three atomic operations have been specifically defined to facilitate GPT-3.5-turbo in its ability to simulate human-like multi-round modifications during the writing process. These atomic operations can be reused.
2309.09150#75
2309.09150#77
2309.09150
[ "2204.02311" ]
2309.09150#77
Can Large Language Models Understand Real-World Complex Instructions?
/* Task Prompt */ As a thinker, when presented with a simple thinking problem, your objective is to simulate human thinking, following these steps: 1. Refine the requirements of the thinking questions to render the results more specific, intuitive, easily consultable and comprehensible. 2. Engage in multiple rounds of dialogue to continually probe and gain insights into the issue. /* Defined Atomic Operations */ You can combine the following operations: 1. "Modification": Add, delete, modify the restrictions of the Evolved Instruction, including its output format (JSON, XML, CSV, Markdown table, Python list, Numeric sequence, etc.), imposing word/sentence/sample count limits, and incorporating key information (keywords, hashtags, URLs, etc.), language.
2309.09150#76
2309.09150#78
2309.09150
[ "2204.02311" ]
2309.09150#78
Can Large Language Models Understand Real-World Complex Instructions?
2. "Specification": Further inquire about the specific details or ask for more information. /* Output Format */ To fulfill this task, you are expected to provide your responses in the following JSON format: { "Operations": [ { "operation": <"Modification" or "Specification">, "thoughts": <Your thinking process>, "takeways": <Briefly summarize your thought process into a short instruction> "evolved instruction": <A more complex instruction according to your selected operation> } ] } /* Histories */ Input:
2309.09150#77
2309.09150#79
2309.09150
[ "2204.02311" ]
2309.09150#79
Can Large Language Models Understand Real-World Complex Instructions?
Provide five innovative or improved methods to solve everyday life problems. Output: { "Operations": [ { "operation": "Modification", "thoughts": "For easier readability, I'd like the output in the form of a Markdown table. Specifically, I'm interested in keywords, summaries, and steps for each method.", "takeways": [ "Output in Markdown table format", "Including keywords, summaries, and steps" ] "evolved instruction": [ "
2309.09150#78
2309.09150#80
2309.09150
[ "2204.02311" ]
2309.09150#80
Can Large Language Models Understand Real-World Complex Instructions?
Present five innovative or improved methods for solving everyday life problems through Markdown table format, including keywords, introductions, and steps." ] }, { "operation": "Modification", "thoughts": "The English version would be more convenient for me to read.", "takeways": [ "Translate into English." ] "evolved instruction": [ "In Markdown table format, present five innovative or improved methods for solving everyday life problems, including keywords, summaries, and steps, and then translate into English."
2309.09150#79
2309.09150#81
2309.09150
[ "2204.02311" ]
2309.09150#81
Can Large Language Models Understand Real-World Complex Instructions?
] } ] /* Input */ Input: List three animals of different species. } Table 11: The prompt for enhancing the complexity of the simple instruction in the Brainstorming task during the Data Evolution process.
2309.09150#80
2309.09150
[ "2204.02311" ]
2309.09013#0
Bridging Dense and Sparse Maximum Inner Product Search
# Bridging Dense and Sparse Maximum Inner Product Search SEBASTIAN BRUCH, Pinecone, USA FRANCO MARIA NARDINI, ISTI-CNR, Italy AMIR INGBER, Pinecone, Israel EDO LIBERTY, Pinecone, USA Maximum inner product search (MIPS) over dense and sparse vectors has progressed independently in a bifurcated literature for decades; the latter is better known as top-k retrieval in Information Retrieval. This duality exists because sparse and dense vectors serve different end goals. That is despite the fact that they are manifestations of the same mathematical problem. In this work, we ask if algorithms for dense vectors could be applied effectively to sparse vectors, particularly those that violate the assumptions underlying top-k
2309.09013#1
2309.09013
[ "2104.05740" ]
2309.09013#1
Bridging Dense and Sparse Maximum Inner Product Search
retrieval methods. We study IVF-based retrieval where vectors are partitioned into clusters and only a fraction of clusters are searched during retrieval. We conduct a comprehensive analysis of dimensionality reduction for sparse vectors, and examine standard and spherical KMeans for partitioning. Our experiments demonstrate that IVF serves as an efficient solution for sparse MIPS. As byproducts, we identify two research opportunities and demonstrate their potential. First, we cast the IVF paradigm as a dynamic pruning technique and turn that insight into a novel organization of the inverted index for approximate MIPS for general sparse vectors. Second, we offer a unified regime for MIPS over vectors that have dense and sparse subspaces, and show its robustness to query distributions.
2309.09013#0
2309.09013#2
2309.09013
[ "2104.05740" ]
2309.09013#2
Bridging Dense and Sparse Maximum Inner Product Search
CCS Concepts: • Information systems → Retrieval models and ranking. Additional Key Words and Phrases: Maximum Inner Product Search, Top-k Retrieval, Sparse Vectors, Dense Vectors, Hybrid Vectors, Sketching, IVF 1 INTRODUCTION Retrieval is one of the most fundamental questions in Information Retrieval (IR), as the name of the discipline itself reflects. Simply put, given a large number of objects, we wish to find, in an efficient manner, the closest subset of those objects to a query according to some notion of closeness. The data structure and algorithmic inventions [68, 83] that have emerged from the IR literature to address this deceptively simple question have had enormous impact on the field and birthed major research directions. They provide the machinery to scale ranking to massive datasets within multi-stage ranking systems [6, 7, 14, 40], for instance, or power large-scale applications, of which search is a notable and ubiquitous example. Much of the IR research on retrieval targets textual data, where documents and queries are texts in natural languages. Unsurprisingly, then, the retrieval machinery that exists today is highly optimized for data that is governed by the laws of natural languages (such as Zipf's law) and the way users interact with retrieval and search systems (e.g., by means of short, keyword queries). The inverted index [83], for example, is inspired by how we historically organized and found information in a book or at a library. Our measures of closeness, such as TF-IDF and BM25 [62], rely on statistics that reflect our understanding of the relevance between two pieces of text. The dynamic pruning algorithms that help us traverse inverted indexes efficiently [11, 18, 23, 41, 47, 53, 59, 68] to find the top k most relevant documents to a query, too, rely on the statistical properties of language and relevance measures.
2309.09013#1
2309.09013#3
2309.09013
[ "2104.05740" ]
2309.09013#3
Bridging Dense and Sparse Maximum Inner Product Search
Authors' addresses: Sebastian Bruch, Pinecone, New York, NY, USA, [email protected]; Franco Maria Nardini, ISTI-CNR, Pisa, Italy, [email protected]; Amir Ingber, Pinecone, Tel Aviv, Israel, [email protected]; Edo Liberty, Pinecone, New York, NY, USA, [email protected].
2309.09013#2
2309.09013#4
2309.09013
[ "2104.05740" ]
2309.09013#4
Bridging Dense and Sparse Maximum Inner Product Search
While the form of retrieval above is the bedrock of a flurry of other research and applications in IR, the rise of deep learning in recent years brought a different form of retrieval into the IR spotlight: Approximate Nearest Neighbor (ANN) search [28, 31, 32, 36, 50, 71] in dense vector spaces. ANN search has for decades played an outsize role in research problems that are adjacent to text retrieval such as image and multimedia retrieval [58, 80]. Its machinery is optimized for objects and queries that are real vectors in some high-dimensional space, and where closeness is determined by inner product or proper metrics such as Euclidean distance. Today, efficient and effective data structures and algorithms for this problem are often critical components in, among other applications, semantic search, where, using deep learning, we learn a vector representation of documents and queries in a space where closeness of vectors implies semantic similarity of their corresponding texts [40]. 1.1 Maximum Inner Product Search as the Unifying Problem The fact that these two branches of retrieval have historically progressed independently makes a great deal of sense: they have targeted quite different applications.
2309.09013#3
2309.09013#5
2309.09013
[ "2104.05740" ]
2309.09013#5
Bridging Dense and Sparse Maximum Inner Product Search
Today's reality, driven by the burgeoning role of deep learning in IR and the effectiveness of learnt representations in many related domains, however, begins to challenge the status quo. Let us illustrate our point by considering joint lexical-semantic search [12, 17, 34, 37, 44, 45, 72, 75] as an example. In that setup, documents and queries are represented as learnt vectors and as bags of words. Retrieval is then performed over both representations to find the documents that are both lexically and semantically close to a query. This application is at the confluence of (inverted index-based) top-k retrieval and ANN search.
2309.09013#4
2309.09013#6
2309.09013
[ "2104.05740" ]
2309.09013#6
Bridging Dense and Sparse Maximum Inner Product Search
The challenge presented by the historical dichotomy is that researchers and practitioners alike must study and develop two disparate systems that are characteristically different. At the same time, we are witnessing the success of methods that learn term importance weights from texts [9, 19, 24–26, 39, 51, 79, 82], rather than computing them based on term frequency and propensity. It has been shown that the weights learnt this way exhibit distributional properties that do not conform to the expectations of inverted-index based retrieval algorithms [16, 49]. This challenges some of the assumptions underlying dynamic pruning algorithms and thus the efficacy of inverted index-based retrieval in the face of arbitrarily-distributed term weights [16, 48]. The existing literature gives effective solutions of various degrees of complexity to each and every one of the shortcomings above [46, 49, 52, 75, 78]. In this work, we wish to investigate a more general question that arises if we returned to the principles and re-examined the most glaring fact: It should come as no surprise that both branches of retrieval operate on vectors and, often, attempt to solve Maximum Inner Product Search (MIPS). It just so happens that in one branch the vectors are dense (i.e., all coordinates are almost surely non-zero) and in the other sparse (i.e., where, relative to the dimensionality of the space, very few coordinates are non-zero).
2309.09013#5
2309.09013#7
2309.09013
[ "2104.05740" ]
2309.09013#7
Bridging Dense and Sparse Maximum Inner Product Search
We call the former "dense MIPS" and the latter "sparse MIPS" for brevity. 1.2 Sparse MIPS as a Subclass of Dense MIPS It is clear that solutions devised for sparse MIPS are not immediately applicable to dense MIPS. That is because sparse MIPS algorithms operate under stricter distributional assumptions than dense MIPS algorithms do; in other words, the class of sparse vectors for which MIPS solutions exist is a subset of the class of dense vectors. For example, inverted index-based solutions are only efficient if the vectors are sparse1 and non-negative, and if their sparsity pattern takes on a Zipfian shape. Dense MIPS algorithms, on the other hand, have fewer inherent limitations. A natural question
2309.09013#6
2309.09013#8
2309.09013
[ "2104.05740" ]
2309.09013#8
Bridging Dense and Sparse Maximum Inner Product Search
A natural question 1In fact, query vectors are often required to be much more sparse than document vectors for a sparse MIPS solution to remain reasonably efficient. # Bridging Dense and Sparse Maximum Inner Product Search Algorithm 1: Indexing Input: Collection X of sparse vectors in Rð ; Number of clusters, ð ; Random projector, ð : Rð â Rð where ð â ª ð ; Clustering algorithm Cluster that returns partitions of input data and their representatives. Result: Cluster assignments Pð = { ð | ð ¥ ( ð ) â Partition ð } and cluster representatives Cð â s. Ë X â {ð (ð ¥) | ð ¥ â X} 1: 2: Partitions, Representatives â Cluster( Ë X; ð ) 3: Pð â { ð | Ë ð ¥ ( ð ) â Partitions[ð ]}, â 1 â ¤ ð â ¤ ð 4: Cð â Representatives[ð ], â 1 â ¤ ð â ¤ ð
2309.09013#7
2309.09013#9
2309.09013
[ "2104.05740" ]
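The indexing procedure of Algorithm 1 can be sketched in a few lines of Python, assuming numpy and scikit-learn are available. The Gaussian random matrix here stands in for the random projector φ (a JL-style linear map), and standard KMeans plays the role of the Cluster routine; symbol names follow the cleaned-up pseudocode above, but this is an illustrative sketch rather than the authors' implementation.

```python
# Minimal sketch of Algorithm 1 (indexing): project sparse vectors, then cluster.
import numpy as np
from sklearn.cluster import KMeans

def build_ivf_index(X_sparse: np.ndarray, num_clusters: int, d: int, seed: int = 0):
    """X_sparse: (num_docs, n) array that is mostly zeros.
    Returns (partition assignments P_i, centroids C_i, projector R)."""
    rng = np.random.default_rng(seed)
    n = X_sparse.shape[1]
    # Random projector phi(x) = R x, with R of shape (d, n) and d << n.
    R = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, n))
    X_sketch = X_sparse @ R.T                        # step 1: sketch every vector
    km = KMeans(n_clusters=num_clusters, n_init=10, random_state=seed).fit(X_sketch)
    assignments = [np.flatnonzero(km.labels_ == i) for i in range(num_clusters)]
    return assignments, km.cluster_centers_, R       # P_i's, C_i's, and phi
```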
2309.09013#9
Bridging Dense and Sparse Maximum Inner Product Search
5: return P and C
that arises given the observation above is whether dense MIPS algorithms remain effective and efficient when applied to sparse vectors. That is the primary motivation behind this study. While conceptually simple and admittedly pedestrian, applying dense MIPS solutions to sparse vectors faces many challenges. And therein lies our technical contribution: We present, as a proof of concept, the machinery that enables such a formulation. We start by foregoing exactness and instead developing ideas on the principle of probable approximate correctness (PAC). In other words, instead of insisting on finding the exact set of top k documents, we settle with an approximate set that may erroneously contain some farther-afield documents and mistakenly miss other close-by documents. In the IR literature, this is the familiar notion of rank-unsafe retrieval [68]. Having accepted some (quantifiable) error in the retrieval outcome, we are faced with the next, rather debilitating challenge of working with often extremely high dimensional sparse vectors. It is here that we appeal to results from related disciplines that study data-oblivious ℓ2-subspace embedding [73] and non-linear sketching2 (itself sparse) of sparse vectors [16]. These dimensionality reduction techniques use the elegant yet simple idea of random projections to preserve Euclidean distance or inner product between vectors. To understand the ramifications of reducing dimensions (and thereby losing information) for sparse MIPS, we study the behavior of two particular random projection techniques when applied to sparse vectors: the linear Johnson-Lindenstrauss (JL) [1–4, 33] transform and the non-linear Sinnamon [16] transform. We study this particular topic in depth in Section 4. By projecting sparse high-dimensional vectors into a (possibly dense) low-dimensional subspace, we have removed the main barrier to applying dense MIPS solutions to sparse vectors and are therefore prepared to investigate our main research question above. We are particularly interested in a method commonly known as Inverted File-based (IVF) retrieval: It begins by clustering vectors into partitions in an unsupervised manner. When it receives a query vector, it identifies a subset of the more "
2309.09013#8
2309.09013#10
2309.09013
[ "2104.05740" ]
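A quick way to get a feel for the linear (JL-style) projection discussed above is to measure how well a Gaussian random map preserves inner products of synthetic sparse vectors. The dimensions, sparsity rate, and sample sizes below are arbitrary illustrative choices, and this small experiment is only in the spirit of the paper's much more detailed analysis.

```python
# Measuring inner-product distortion of a Gaussian (JL-style) projection.
import numpy as np

def random_sparse(num, dim, nnz, rng):
    X = np.zeros((num, dim))
    for row in X:
        idx = rng.choice(dim, size=nnz, replace=False)
        row[idx] = rng.normal(size=nnz)
    return X

rng = np.random.default_rng(0)
n, d, nnz = 10_000, 256, 100                 # original dim, sketch dim, non-zeros
docs = random_sparse(500, n, nnz, rng)
queries = random_sparse(16, n, nnz, rng)
R = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, n))   # E[<Rq, Rx>] = <q, x>
exact = queries @ docs.T                      # true inner products
approx = (queries @ R.T) @ (docs @ R.T).T     # inner products of the sketches
print("mean absolute error:", np.abs(exact - approx).mean())
```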
2309.09013#10
Bridging Dense and Sparse Maximum Inner Product Search
promising" partitions, and conducts (exact or approximate) retrieval only over the subset of documents assigned to them. The search over the sub-collection can be delegated to another MIPS algorithm, the most naïve of which is an exhaustive, exact search. To understand how (sketches of) sparse vectors behave in an IVF retrieval system, we empirically evaluate standard and spherical KMeans [21] on a range of datasets. This analysis is the main topic of Section 5. Together, dimensionality reduction via random projections and clustering enable the IVF paradigm for sparse vectors. Algorithm 1 describes the end-to-end indexing procedure, and Algorithm 2
2309.09013#9
2309.09013#11
2309.09013
[ "2104.05740" ]
2309.09013#11
Bridging Dense and Sparse Maximum Inner Product Search
2We use "sketch" to describe a compressed representation of a high-dimensional vector, and "to sketch" to describe the act of compressing a vector into a sketch.
Algorithm 2: Retrieval
Input: Sparse query vector, q ∈ R^n; clusters and representatives, P, C, obtained from Algorithm 1; random projector φ: R^n → R^d where d ≪ n; number of data points to examine, ℓ ≤ |X|, where |X| denotes the size of the collection; MIPS sub-algorithm R.
Result: Approximate set of top-k vectors that maximize inner product with q.
2309.09013#10
2309.09013#12
2309.09013
[ "2104.05740" ]
2309.09013#12
Bridging Dense and Sparse Maximum Inner Product Search
1: q̃ ← φ(q)
2: SortedClusters ← SortDescending(P by ⟨q̃, C_i⟩)
3: TotalSize ← 0
4: I ← ∅
5: for P_{π_i} ∈ SortedClusters do
6:     I ← I ∪ {π_i}
7:     TotalSize ← TotalSize + |P_{π_i}|
8:     if TotalSize ≥ ℓ then break
9: end for
10: return Top-k vectors from partitions P_I = {P_i | i ∈ I} w.r.t. ⟨q, ·⟩ using R
gives details of the retrieval logic. We encourage the reader to refer to Section 3 for an overview of our adopted notation. 1.3 Research Byproducts As we demonstrate, it is certainly feasible and, given an appropriate tolerance for error, often effective to apply Algorithms 1 and 2 to sparse vectors. That possibility immediately leads to two important observations that we explore later in this work. First, we remark that, in effect, clustering a document collection and performing search over only a fraction of the resulting clusters constitutes a dynamic pruning method,
2309.09013#11
2309.09013#13
2309.09013
[ "2104.05740" ]
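A companion sketch for Algorithm 2 is given below, again assuming numpy and scikit-learn. A compact index-building helper is repeated so the example is self-contained; clusters are ranked by the sketched query's inner product with their centroids, probed until roughly ℓ documents are covered, and the survivors are rescored exactly against the original sparse vectors, which plays the role of the naive MIPS sub-algorithm R. The toy data generation at the end is purely illustrative.

```python
# Minimal sketch of Algorithm 2 (retrieval) on top of an Algorithm-1-style index.
import numpy as np
from sklearn.cluster import KMeans

def build_ivf_index(X_sparse, num_clusters, d, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, X_sparse.shape[1]))
    km = KMeans(n_clusters=num_clusters, n_init=10, random_state=seed).fit(X_sparse @ R.T)
    parts = [np.flatnonzero(km.labels_ == i) for i in range(num_clusters)]
    return parts, km.cluster_centers_, R

def ivf_search(q, X_sparse, parts, centroids, R, ell, k):
    q_sketch = R @ q
    order = np.argsort(-(centroids @ q_sketch))   # most promising clusters first
    candidates, total = [], 0
    for i in order:                               # probe clusters until ~ell docs seen
        candidates.extend(parts[i])
        total += len(parts[i])
        if total >= ell:
            break
    cand = np.array(candidates)
    scores = X_sparse[cand] @ q                   # exact inner products on originals
    return cand[np.argsort(-scores)[:k]]

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 500)) * (rng.random((2000, 500)) < 0.02)  # toy sparse data
parts, cents, R = build_ivf_index(X, num_clusters=32, d=64)
q = rng.normal(size=500) * (rng.random(500) < 0.02)
print(ivf_search(q, X, parts, cents, R, ell=400, k=10))
```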
2309.09013#13
Bridging Dense and Sparse Maximum Inner Product Search
albeit a rank-unsafe one. We use this insight to propose an organization of the inverted index where inverted lists consist of blocks, with each block containing documents that fall into the same partition, and sorted by partition identifier. We show that appropriately using skip pointers over inverted lists facilitates fast approximate top-k retrieval for general sparse vectors, that is, vectors that need not conform to any distributional requirements. Experiments confirm the efficiency and effectiveness of our proposal. Secondly, we offer a fresh but natural perspective to unify the two worlds of dense and sparse MIPS into a single, elegant framework at the systems level. In particular, we consider hybrid vectors (i.e., vectors that may contain dense and sparse subspaces) in an IVF retrieval system. We demonstrate empirically that the clusters formed by our proposal are effective, and, regardless of how the ℓ2 mass is split between the dense and sparse subspaces, retrieval can be arbitrarily accurate. 1.4 Contributions We summarize our contributions as follows:
2309.09013#12
2309.09013#14
2309.09013
[ "2104.05740" ]
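The block-organized inverted index described above can be illustrated with a deliberately simplified Python sketch: each coordinate's inverted list is grouped by partition identifier, so query processing touches only the blocks of the probed partitions. Real implementations would use compressed postings and skip pointers; plain dictionaries stand in for both here, and all names are illustrative.

```python
# Simplified illustration of a partition-blocked inverted index.
from collections import defaultdict

def build_blocked_index(docs: list[dict[int, float]], assignment: list[int]):
    # index[coordinate][partition] -> list of (doc_id, value) postings
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, (doc, part) in enumerate(zip(docs, assignment)):
        for coord, value in doc.items():
            index[coord][part].append((doc_id, value))
    return index

def search(index, query: dict[int, float], probed_partitions: set[int], k: int):
    scores = defaultdict(float)
    for coord, q_val in query.items():
        for part in probed_partitions:                  # skip every other block
            for doc_id, d_val in index[coord].get(part, []):
                scores[doc_id] += q_val * d_val
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

docs = [{1: 0.5, 7: 1.2}, {1: 0.9}, {7: 0.3, 9: 2.0}]
index = build_blocked_index(docs, assignment=[0, 1, 0])
print(search(index, query={1: 1.0, 7: 2.0}, probed_partitions={0}, k=2))
```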
2309.09013#14
Bridging Dense and Sparse Maximum Inner Product Search
• We analyze the effect of linear and non-linear random projection algorithms on the inner product approximation of sparse vectors; • We extend the clustering-based IVF method of dense MIPS to (sketches of) sparse vectors, and, in that context, empirically evaluate standard and spherical KMeans clustering algorithms;
2309.09013#13
2309.09013#15
2309.09013
[ "2104.05740" ]
2309.09013#15
Bridging Dense and Sparse Maximum Inner Product Search
• We use our findings to propose a novel organization of the inverted index that facilitates approximate MIPS over general sparse vectors, thereby freeing sparse MIPS from strict distributional requirements of traditional top-k retrieval algorithms in IR; and,
2309.09013#14
2309.09013#16
2309.09013
[ "2104.05740" ]
2309.09013#16
Bridging Dense and Sparse Maximum Inner Product Search
We refrain from reviewing this vast literature here and, instead, refer the reader to excellent existing surveys [68, 83] on the topic. But to give context to our work, we quickly make note of key algorithms and explain what makes them less than ideal for the setup we consider in this work. Sparse MIPS for Text Collections. MaxScore [69] and WAND [11], along with their intel- 2.1.1 lectual descendants [22, 23, 53, 54] are the de facto sparse MIPS algorithms, applied typically to vectors obtained obtained from a BM25-encoding [62] of text. This family of algorithms augment a document identifier-sorted inverted index with upper-bounds on the partial score contribution of each coordinate to the final inner product. With that additional statistic, it is possible to traverse the inverted lists one document at a time and decide if a document may possibly end up in the top ð set: if the document appears in enough inverted lists whose collective score upper-bound exceeds
2309.09013#15
2309.09013#17
2309.09013
[ "2104.05740" ]
2309.09013#17
Bridging Dense and Sparse Maximum Inner Product Search
111:5 111:6 Sebastian Bruch, Franco Maria Nardini, Amir Ingber, and Edo Liberty the current threshold (i.e., minimum of scores in the current top-ð set), then that document should be fully evaluated; otherwise, it has no prospect of ever making it to the top-ð set and can therefore be safely rejected. As articulated elsewhere [16], the logic above is effective when vectors have very specific properties: non-negativity, asymmetricly higher sparsity rate in queries, and a Zipfian distribution of the length of inverted lists. It should be noted that these assumptions are true of relevance measures such as BM25 [62]; sparse MIPS algorithms were designed for text distributions after all. The limitations of existing algorithms render them inefficient for the general case of sparse MIPS, where vectors may be real-valued and whose sparsity rate is closer to uniform across dimensions. That is because, coordinate upper-bounds become more uniform, leading to less effective pruning of the inverted lists. That, among other problems [16, 18], renders the particular dynamic pruning strategy in MaxScore and WAND ineffective, as demonstrated empirically in the past [16, 48].
2309.09013#16
2309.09013#18
2309.09013
[ "2104.05740" ]
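The upper-bound test at the heart of MaxScore/WAND-style dynamic pruning can be rendered as a toy Python check: a document is fully scored only if the sum of the per-coordinate score upper bounds of the lists it appears in can beat the current top-k threshold. The real algorithms operate over ordered posting lists with skip pointers; only the bound test is shown here, with made-up numbers.

```python
# Toy rendition of the WAND/MaxScore upper-bound pruning test.
def may_enter_topk(doc_coords: set[int], query: dict[int, float],
                   upper_bounds: dict[int, float], threshold: float) -> bool:
    """Return True if the document's best possible score can exceed the threshold."""
    bound = sum(query[c] * upper_bounds[c] for c in query if c in doc_coords)
    return bound > threshold

upper_bounds = {1: 1.5, 7: 0.8, 9: 2.0}       # max value stored in each inverted list
query = {1: 1.0, 7: 2.0}
print(may_enter_topk({1, 7}, query, upper_bounds, threshold=3.0))  # True  (3.1 > 3.0)
print(may_enter_topk({7}, query, upper_bounds, threshold=3.0))     # False (1.6 <= 3.0)
```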
2309.09013#18
Bridging Dense and Sparse Maximum Inner Product Search
Signatures for Logical Queries. There are alternatives to the inverted index, however, such 2.1.2 as the use of signatures for retrieval and sketches for inner product approximation [27, 61, 70]. In this class of algorithms, Goodwin et al. [27] describe the BitFunnel indexing machinery. BitFunnel stores a bit signature for every document vector in the index using Bloom filters. These signatures are scanned during retrieval to deduce if a document contains the terms of a conjunctive query. While it is encouraging that a signature-based replacement to inverted indexes appears not only viable but very much practical, the query logic BitFunnel supports is limited to logical ANDs and does not generalize to the setup we are considering in this work. Pratap et al. considered a simple algorithm [61] to sketch sparse binary vectors so that the inner product of sketches approximates the inner product of original vectors. They do so by randomly projecting each coordinate in the original space to coordinates in the sketch. When two or more non-zero coordinates collide, the sketch records their logical OR. While a later work extends this idea to categorical-valued vectors [70], it is not obvious how the proposed sketching mechanisms may be extended to real-valued vectors. 2.1.3 General Sparse MIPS. The most relevant work to ours is the recent study of general sparse MIPS by Bruch et al. [16]. Building on random projections, the authors proposed a sketching algorithm, dubbed Sinnamon, that embeds sparse vectors into a low-dimensional sparse subspace. Sinnamon, as with the previous approach, randomly projects coordinates from the original space to the sketch space. But the sketch space is a union of two subspaces: One that records the upper- bound on coordinate values and another that registers the lower-bound instead. It was shown that reconstructing a sparse vector from the sketch approximates inner product with any arbitrary query with high accuracy. Bruch et al. [16] couple the sketches with an inverted index, and empirically evaluate a coordinate- at-a-time algorithm for sparse MIPS. They show considerable compression rate in terms of the size of the index as well as latencies that are sometimes an order of magnitude better than WAND on embedding vectors produced by Splade [24, 25].
2309.09013#17
2309.09013#19
2309.09013
[ "2104.05740" ]
2309.09013#19
Bridging Dense and Sparse Maximum Inner Product Search
2.2 Dense MIPS Let us note that there exists an extremely vast body of works on approximate nearest neighbor (ANN) search that is in and of itself an interesting area of research. Strictly speaking, however, MIPS is a fundamentally different (and, in fact, a much harder) problem because inner product is not a proper metric; in fact, maximum cosine similarity search and ANN with Euclidean distance are special cases of MIPS. In spite of this, many MIPS solutions for dense vectors adapt ANN solutions to inner product, often without any theoretical justification. # Bridging Dense and Sparse Maximum Inner Product Search Consider, for example, the family of MIPS solutions that is based on proximity graphs such as IP-NSW [55] and its many derivatives [42, 65, 81]. These classes of algorithms construct a graph where each data point is a node in the graph and two nodes are connected if they are deemed â similar.â Typically, similarity is based on Euclidean distance. But the authors of [55] show that when one uses inner product (albeit improperly) to construct the graph, the resulting structure is nonetheless capable of finding the maximizers of inner product rather quickly and accurately. Graph-based methods may work well but they come with two serious issues. First, while we can reason about their performance in the Euclidean space, we can say very little about why they do or do not work for inner product, and under what conditions they may fail. It is difficult, for example, to settle on a configuration of hyperparameters without conducting extensive experiments and evaluation on a validation dataset. The second and even more limiting challenge is the poor scalability and slow index construction of graph methods. Another family of MIPS algorithms can best be described as different realizations of Locality Sensitive Hashing (LSH) [29, 30, 43, 56, 63, 64, 74, 77]. The idea is to project data points such that â similarâ points are placed into the same â bucket.â Doing so enables sublinear search because, during retrieval, we limit the search to the buckets that collide with the query. Many LSH methods for MIPS transform the problem to Euclidean or angular similarity search first, in order to then recycle existing hash functions.
2309.09013#18
2309.09013#20
2309.09013
[ "2104.05740" ]
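The binary OR-sketch idea attributed to Pratap et al. above can be illustrated in a few lines: every original coordinate is randomly mapped to one sketch coordinate, and colliding non-zero bits are combined with a logical OR. The dimensions and sparsity below are arbitrary, and note that the raw overlap of two sketches over-estimates the true overlap when collisions occur; the original work pairs the sketch with an estimator to correct for that.

```python
# Sketching sparse binary vectors via random projection with logical OR.
import numpy as np

def or_sketch(x_bits: np.ndarray, mapping: np.ndarray, sketch_dim: int) -> np.ndarray:
    sketch = np.zeros(sketch_dim, dtype=bool)
    sketch[mapping[x_bits]] = True        # colliding non-zero bits are OR-ed together
    return sketch

rng = np.random.default_rng(0)
dim, sketch_dim = 1000, 64
mapping = rng.integers(0, sketch_dim, size=dim)   # coordinate -> sketch coordinate
x = rng.random(dim) < 0.01
y = rng.random(dim) < 0.01
sx, sy = or_sketch(x, mapping, sketch_dim), or_sketch(y, mapping, sketch_dim)
print(int((x & y).sum()), int((sx & sy).sum()))   # exact vs. (biased) sketch overlap
```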
2309.09013#20
Bridging Dense and Sparse Maximum Inner Product Search
One of the main challenges with this way of approaching MIPS is that inner product behaves oddly in high dimensions, in a way that is different from, say, Euclidean distance: the maximum inner product between vectors is typically much smaller than the average vector norm. Making LSH-based MIPS accurate requires an increasingly larger number of projections, which leads to an unreasonable growth in index size [67]. Another method that is borrowed from the ANN literature is search using an inverted file (IVF). This method takes advantage of the geometrical structure of vectors to break a large collection into smaller partitions. Points within each partition are expected to result in a similar inner product with an arbitrary query pointâ
2309.09013#19
2309.09013#21
2309.09013
[ "2104.05740" ]
2309.09013#21
Bridging Dense and Sparse Maximum Inner Product Search
though there are no theoretical guarantees that that phenomenon actually materializes. Despite that, clustering-based IVF is a simple and widely-adopted technique [31, 32], and has been shown to perform well for MIPS [8]. Its simplicity and well-understood behavior are the reasons we study this particular technique in this work. Finally, in our review of the dense MIPS literature, we exclusively described space partitioning algorithms that reduce the search space through some form of partitioning or hashing, or by organizing vectors in a graph structure and traversing the edges towards the nearest neighbors of a given query. It should be noted, however, that the other and often critical aspect of MIPS is the actual computation of inner product. There are many works that address that particular challenge often via quantization (see [28] and references therein) but that are beyond the scope of this article. 3 NOTATION AND EXPERIMENTAL SETUP We begin by laying out our notation and terminology. Furthermore, throughout this work, we often interleave theoretical and empirical analysis. To provide sufficient context for our arguments, this section additionally gives details on our empirical setup and evaluation measures.
3.1 Notation

Suppose we have a collection $\mathcal{X} \subset \mathbb{R}^{N+n}$ of possibly hybrid vectors. That means, if $x \in \mathcal{X}$, then $x$ is a vector comprised of an $n$-dimensional dense and an $N$-dimensional sparse array of coordinates, where dense and sparse are as defined in Section 1. We abuse terminology and call the dense part of $x$ its "dense vector" and denote it by $x_d \in \mathbb{R}^n$. Similarly, we call the sparse part, $x_s \in \mathbb{R}^N$, its "sparse vector." We can write $x = x_d \oplus x_s$, where $\oplus$ denotes concatenation.
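To make the notation concrete, the following minimal sketch (our illustration, not the authors' code) represents a hybrid vector as a dense array paired with a sparse coordinate-to-value mapping, and evaluates $\langle q, x \rangle$ as the sum of the dense and sparse partial inner products.

```python
import numpy as np

class HybridVector:
    """x = x_d (+) x_s: a dense part in R^n and a sparse part in R^N."""

    def __init__(self, dense: np.ndarray, sparse: dict[int, float]):
        self.dense = dense    # x_d, stored explicitly
        self.sparse = sparse  # x_s, stored as {coordinate: value} over nz(x_s)

def inner_product(q: HybridVector, x: HybridVector) -> float:
    # <q, x> = <q_d, x_d> + <q_s, x_s>, since the two parts occupy disjoint coordinates.
    dense_part = float(np.dot(q.dense, x.dense))
    sparse_part = sum(v * x.sparse.get(i, 0.0) for i, v in q.sparse.items())
    return dense_part + sparse_part
```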
Table 1. Datasets of interest along with select statistics. The rightmost two columns report the average number of non-zero entries in documents and, in parentheses, queries for sparse vector representations of the datasets.

| Dataset | Document Count | Query Count | Splade | Efficient Splade |
|---|---|---|---|---|
| MS Marco Passage | 8.8M | 6,980 | 127 (49) | 185 (5.9) |
| NQ | 2.68M | 3,452 | 153 (51) | 212 (8) |
| Quora | 523K | 10,000 | 68 (65) | 68 (8.9) |
| HotpotQA | 5.23M | 7,405 | 131 (59) | 125 (13) |
| Fever | 5.42M | 6,666 | 145 (67) | 140 (8.6) |
| DBPedia | 4.63M | 400 | 134 (49) | 131 (5.9) |
The delineation above will prove helpful later when we discuss the status quo and our proposal within one mathematical framework. In particular, we can say that a sparse retrieval algorithm operates on the sparse collection $\mathcal{X}_s = \{x_s \mid x = x_d \oplus x_s \in \mathcal{X}\}$, and similarly dense retrieval algorithms operate on $\mathcal{X}_d$, defined symmetrically. Hybrid vectors collapse to dense vectors when $N = 0$ (or when $x_s = 0$ for all $x \in \mathcal{X}$), and reduce to sparse vectors when $n = 0$ (or $x_d = 0$ for all $x \in \mathcal{X}$). We wish to solve
$$\mathcal{S} = \arg\max^{(k)}_{x \in \mathcal{X}} \langle q, x \rangle \qquad (1)$$
to find, from $\mathcal{X}$, the set $\mathcal{S}$ of top-$k$ vectors whose inner product with the query vector $q = q_d \oplus q_s \in \mathbb{R}^{N+n}$ is maximal.
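As an illustration of the retrieval problem above, the snippet below performs the $\arg\max^{(k)}$ step by exhaustive scoring; it is a sketch we provide for clarity and assumes the scores $\langle q, x^{(j)} \rangle$ have already been computed by some scoring routine (such as the hybrid inner product shown earlier).

```python
import numpy as np

def exact_top_k(scores: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k largest entries of `scores`, best first.

    scores[j] is assumed to hold <q, x^(j)> for the j-th document in the collection."""
    top = np.argpartition(-scores, k - 1)[:k]   # unordered top-k candidates
    return top[np.argsort(-scores[top])]        # order them by decreasing score

# Example: exact_top_k(np.array([0.1, 0.9, 0.4]), k=2) -> array([1, 2])
```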
Sparse and dense MIPS are then special cases of the formulation above, when query and document vectors are restricted to their sparse or dense subspaces respectively. We write $nz(u)$ for the set of non-zero coordinates in a sparse vector $u$, $nz(u) = \{i \mid u_i \neq 0\}$, and denote the average number of non-zero coordinates by $\psi = \mathbb{E}[|nz(X)|]$ for a random vector $X$. We denote coordinate $i$ of a vector $u$ using subscripts: $u_i$. To refer to the $j$-th vector in a collection of vectors, we use superscripts: $u^{(j)}$. We write $\langle u, v \rangle$ to express the inner product of two vectors $u$ and $v$. We denote the set of consecutive natural numbers $\{1, 2, \ldots, m\}$ by $[m]$ for brevity. Finally, we reserve capital letters to denote random variables (e.g., $X$) and calligraphic letters for sets (e.g., $\mathcal{X}$).

3.2 Experimental Configuration

3.2.1 Datasets. We perform our empirical analysis on a number of publicly available datasets, summarized in Table 1. The largest dataset used in this work is the MS Marco³ Passage Retrieval v1 dataset [57], a retrieval and ranking collection from Microsoft. It consists of about 8.8 million short passages which, along with queries in natural language, originate from Bing. The queries are split into train, dev, and eval non-overlapping subsets. We use the small dev query set (consisting of 6,980 queries) in our analysis. We also experiment with 5 datasets from the BeIR [66] collection⁴: Natural Questions (NQ, question answering), Quora (duplicate detection), HotpotQA (question answering), Fever (fact extraction), and DBPedia (entity search). For a more detailed description of each dataset, we refer the reader to [66].

³Available at https://microsoft.github.io/msmarco/
⁴Available at https://github.com/beir-cellar/beir

3.2.2 Sparse Vectors. We convert the datasets above into sparse vectors by using Splade [24] and Efficient Splade [38].
Splade⁵ [24] is a deep learning model that produces sparse representations for text. The vectors have roughly 30,000 dimensions, where each dimension corresponds to a term in the BERT [20] WordPiece [76] vocabulary. Non-zero entries in a vector reflect learnt term importance weights. Splade representations allow us to test the behavior of our algorithm on query vectors with a large number of non-zero entries. However, we also create another set of vectors using a more efficient variant of Splade, called Efficient Splade⁶ [38]. This model produces queries that have far fewer non-zero entries than the original Splade model, but documents that may have a larger number of non-zero entries. These two models give us a range of sparsity rates on which to examine our algorithms. As a way to compare and contrast the more pertinent properties of the learnt sparse representations, Table 1 shows the differences in the sparsity rate of the two embedding models for all datasets considered in this work.
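For readers who want to reproduce such vectors, the following sketch shows one way to encode text with the Splade checkpoint named in footnote 5, assuming the commonly used Splade pooling (a max over token positions of $\log(1 + \mathrm{ReLU}(\cdot))$ applied to the masked-language-model logits); the authors' exact encoding pipeline may differ.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

CHECKPOINT = "naver/splade-cocondenser-ensembledistil"  # from footnote 5
tok = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForMaskedLM.from_pretrained(CHECKPOINT)

def splade_encode(text: str) -> dict[int, float]:
    """Return a {term_id: weight} sparse vector over the WordPiece vocabulary."""
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits                      # (1, seq_len, vocab_size)
    # Splade pooling: max over tokens of log(1 + ReLU(logit)), masked by attention.
    weights = torch.log1p(torch.relu(logits)) * inputs["attention_mask"].unsqueeze(-1)
    vec = weights.max(dim=1).values.squeeze(0)               # (vocab_size,)
    nz = vec.nonzero().squeeze(-1)
    return {int(i): float(vec[i]) for i in nz}
```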
3.2.3 Evaluation. Our main metric of interest is the accuracy⁷ of approximate algorithms, measured as follows: for every test query, we obtain the exact solution to MIPS by exhaustively searching over the entire dataset. We then obtain the approximate set of top-$k$ documents using a system of interest. Accuracy is then measured as the fraction of the exact top-$k$ documents that are present in the approximate set. This metric helps us study the impact of the different sources of error.

We also report throughput as queries per second (QPS) in a subset of our experiments where efficiency takes center stage. When computing QPS, we include the time elapsed from the moment query vectors are presented to the algorithm to the moment the algorithm returns the requested top-$k$ document vectors for all queries; we emphasize that the algorithms used in this work do not operate in batch mode. We note that, because this work is a study of retrieval of vectors, we do not factor into throughput the time it takes to embed a given piece of text.
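The accuracy measure just described reduces to a simple set overlap; the helper below is our sketch of that computation, not the authors' evaluation code.

```python
def top_k_accuracy(exact_ids: list[int], approx_ids: list[int]) -> float:
    """Fraction of the exact top-k documents recovered by the approximate top-k set."""
    return len(set(exact_ids) & set(approx_ids)) / len(exact_ids)

# Example: if 8 of the exact top-10 appear in the approximate top-10, accuracy is 0.8.
```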
3.2.4 Hardware and Code. We conduct experiments on a commercially available platform with an Intel Xeon Platinum 8481C Processor (Sapphire Rapids) with a clock rate of 1.9GHz, 20 virtual CPUs (2 vCPUs per physical core), and 44GB of main memory. This setup represents a typical server in a production environment; in fact, we rented this machine from the Google Cloud Platform. We further note that we implemented all the methods discussed in this work in the Rust programming language. We rely on the Rust compiler for any platform-specific optimization and do not otherwise optimize the code for the Intel platform (such as by developing SIMD code).
4 ANALYSIS OF RANDOM PROJECTIONS FOR SPARSE VECTORS

As noted earlier, the historical bifurcation of the retrieval machinery can, in no small part, be attributed to the differences between sparse and dense vectors, in addition to the application domain. For example, sparse vectors are plagued with a much more serious case of the curse of dimensionality. In extremely high-dimensional spaces, where one may have thousands to millions of dimensions, the geometrical properties and probabilistic certainty that power clustering start to break down. So does our intuition of the space.
⁵Pre-trained checkpoint from HuggingFace, available at https://huggingface.co/naver/splade-cocondenser-ensembledistil
⁶Pre-trained checkpoints for the document and query encoders were obtained from https://huggingface.co/naver/efficient-splade-V-large-doc and https://huggingface.co/naver/efficient-splade-V-large-query, respectively.
⁷What we call "accuracy" in this work is also known as "recall" in the ANN literature. However, "recall" is an overloaded term in the IR literature, as it also refers to the portion of relevant documents returned for a query. We use "accuracy" instead to avoid that confusion.
The high dimensionality of sparse vectors poses another challenge: greater computation is required to perform even basic operations. While optimized implementations of spherical KMeans exist for sparse vectors (see, e.g., [35] and references therein), their efficiency nonetheless degrades as the number of dimensions grows. Standard KMeans is even more challenging: cluster centroids are likely to be high-dimensional dense vectors, leading to orders of magnitude more computation to perform cluster assignments in each iteration of the algorithm.
These difficulties, namely computational complexity and geometrical oddities, pose a fundamental challenge to clustering over sparse vectors. That leads naturally to dimensionality reduction, and in particular sketching [73]: summarizing a high-dimensional vector into a lower-dimensional space such that certain properties, such as the distance between points or inner products, are preserved with some quantifiable error. Sketching is appealing because the mathematics behind it offer guarantees in an oblivious manner: with no further assumptions on the source and nature of the vectors themselves or their distribution. Additionally, sketching a vector is often fast, since speed is a requisite for its application in streaming algorithms. Finally, the resulting sketch in a (dense and) low-dimensional space facilitates faster subsequent computation in exchange for a controllable error.

In this work, we explore two such sketching functions ($\phi(\cdot)$ in the notation of Algorithm 1). One is the classical linear Johnson-Lindenstrauss (JL) transform [33], which has powered much of the research on sketching; it produces dense sketches of its input and enables computing an unbiased estimate of inner product (or Euclidean distance). The other is the non-linear Sinnamon function [16], which produces sparse sketches of its input that enable deriving upper-bounds on inner product. In the remainder of this section, we review these two algorithms in depth and compare and contrast their performance. Importantly, we consider the approximation error in isolation: how does sketching affect MIPS if our MIPS algorithm itself were exact?
In other words, if we searched exhaustively for the top-$k$ maximizers of inner product with a query, what accuracy might we expect if that search were performed on sketches of vectors versus the original vectors?

4.1 The Johnson-Lindenstrauss Transform

4.1.1 Review. Let us repeat the result due to Johnson and Lindenstrauss [33] for convenience:

Lemma 4.1 (Johnson-Lindenstrauss). For $0 < \epsilon < 1$, any set $\mathcal{V}$ of $|\mathcal{V}|$ points in $\mathbb{R}^N$, and an integer $n = \Omega(\epsilon^{-2} \ln |\mathcal{V}|)$, there exists a Lipschitz mapping $f: \mathbb{R}^N \rightarrow \mathbb{R}^n$ such that
$$(1 - \epsilon)\lVert u - v \rVert_2^2 \;\le\; \lVert f(u) - f(v) \rVert_2^2 \;\le\; (1 + \epsilon)\lVert u - v \rVert_2^2$$
for all $u, v \in \mathcal{V}$.
This result has been extensively studied and further developed since its introduction. Using simple proofs, for example, it can be shown that the mapping $f$ may be a linear transformation by an $n \times N$ random matrix $\Phi$ drawn from a certain class of distributions. Such a matrix $\Phi$ is said to form a JL transform [73].

There are many constructions of $\Phi$ that form a JL transform. It is trivial to show that when the entries of $\Phi$ are drawn independently from $\mathcal{N}(0, \frac{1}{n})$, then $\Phi$ is a JL transform with parameters $(\epsilon, \delta, f)$ if $n = \Omega(\epsilon^{-2} \ln(f/\delta))$. A matrix $\Phi$ whose entries are independent Rademacher random variables scaled by $1/\sqrt{n}$ (so that $\Phi \in \{\pm 1/\sqrt{n}\}^{n \times N}$) is another simple-to-prove example of a JL transform.
The literature offers a large number of other, more efficient constructions, such as the Fast JL Transform [1], as well as specific theoretical results for sparse vectors (e.g., [10]). We refer the interested reader to [73] for an excellent survey of these results.

4.1.2 Theoretical Analysis. In this work, we are interested in the transformation in the context of inner product rather than the $\ell_2$ norm and Euclidean distance. Let us take $\phi(u) = Ru$, with $R \in \{\pm 1/\sqrt{n}\}^{n \times N}$, as one candidate sketching function in Algorithm 1 and state the following results for our particular construction:

Theorem 4.2. Fix two vectors $u$ and $v \in \mathbb{R}^N$. Define $Z_{\text{Sketch}} = \langle \phi(u), \phi(v) \rangle$ as the random variable representing the inner product of sketches of size $n$, prepared using the projection $\phi(u) = Ru$, with $R \in \{\pm 1/\sqrt{n}\}^{n \times N}$ being a random Rademacher matrix. $Z_{\text{Sketch}}$ is an unbiased estimator of $\langle u, v \rangle$.
Its distribution tends to a Gaussian with variance:
$$\frac{1}{n}\Big( \lVert u \rVert_2^2\, \lVert v \rVert_2^2 + \langle u, v \rangle^2 - 2\sum_i u_i^2 v_i^2 \Big). \qquad (2)$$
We give our proof of the claim above in Appendix A. We next make the following claim for a fixed query vector $q$ and a random document vector, thereby taking it a step closer to the MIPS setup. We present a proof in Appendix B.

Theorem 4.3. Fix a query vector $q \in \mathbb{R}^N$ and let $X$ be a random vector drawn according to the following probabilistic model. Coordinate $i$, $X_i$, is non-zero with probability $p_i > 0$ and, if it is non-zero, draws its value from a distribution with mean $\mu$ and variance $\sigma^2$. Then $Z_{\text{Sketch}} = \langle \phi(q), \phi(X) \rangle$, with $\phi(u) = Ru$ and $R \in \{\pm 1/\sqrt{n}\}^{n \times N}$, has expected value $\mu \sum_i p_i q_i$ and variance:
$$\frac{1}{n}\Big[ (\mu^2 + \sigma^2)\Big( \lVert q \rVert_2^2 \sum_i p_i - \sum_i p_i q_i^2 \Big) + \mu^2\Big( \big(\sum_i p_i q_i\big)^2 - \sum_i p_i^2 q_i^2 \Big) \Big]. \qquad (3)$$

Consider the special case where $p_i = \psi/N$ for some constant $\psi$ for all dimensions $i$. Further assume, without loss of generality, that the (fixed) query vector has unit norm: $\lVert q \rVert_2 = 1$. It can be observed that the variance of $Z_{\text{Sketch}}$ decomposes into a term that is $(\mu^2 + \sigma^2)(1 - 1/N)\psi/n$, and a second term that is a function of $1/N^2$. The mean is a linear function of the non-zero coordinates in the query: $\mu\psi(\sum_i q_i)/N$. As $N$ grows, the mean of $Z_{\text{Sketch}}$ tends to 0 at a rate proportional to the sparsity rate ($\psi/N$), while its variance tends to $(\mu^2 + \sigma^2)\psi/n$.

The analysis above suggests that the ability of $\phi(\cdot)$, as defined in this section, to preserve the inner product of a query vector with a randomly drawn document vector deteriorates as a function of the number of non-zero coordinates. For example, when the number of non-zero coordinates becomes larger, $\langle \phi(q), \phi(X) \rangle$ for a fixed query $q$ and a random vector $X$ becomes less reliable because the variance of the approximation increases.
Nonetheless, as we see later in this work, the degree of noise is often manageable in practice, as evidenced by the accuracy of Algorithm 2.
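The construction analyzed above is easy to state in code. The following is our minimal illustration of the Rademacher projection $\phi(u) = Ru$ with $R \in \{\pm 1/\sqrt{n}\}^{n \times N}$, where $\langle \phi(u), \phi(v) \rangle$ serves as an unbiased, but noisy, estimate of $\langle u, v \rangle$; it is not the authors' Rust implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def jl_rademacher_matrix(n: int, N: int) -> np.ndarray:
    """Entries are +1/sqrt(n) or -1/sqrt(n) with equal probability."""
    return rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)

def jl_sketch(R: np.ndarray, u_sparse: dict[int, float]) -> np.ndarray:
    """phi(u) = R u, exploiting sparsity: only columns in nz(u) contribute."""
    out = np.zeros(R.shape[0])
    for i, v in u_sparse.items():
        out += R[:, i] * v
    return out

# <phi(u), phi(v)> is an unbiased estimate of <u, v> (Theorem 4.2);
# its variance shrinks as the sketch size n grows.
```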
4.2 The Sinnamon Transform

4.2.1 Review. Like the JL transform, Sinnamon [16] aims to reduce the dimensionality of (sparse) vectors. Unlike the JL transform, it does so through a non-linear mapping. Sinnamon uses half the sketch to record upper-bounds on the values of non-zero coordinates in a vector, and the other half to register lower-bounds. For notational convenience, let us assume that the sketch size is $n = 2m$. Given a vector $u \in \mathbb{R}^N$ and $h$ independent random mappings $\pi_o: [N] \rightarrow [m]$ ($1 \le o \le h$), Sinnamon constructs the upper-bound sketch $\overline{u} \in \mathbb{R}^m$ where its $k$-th coordinate is assigned the following value:
$$\overline{u}_k \leftarrow \max_{\{i \in nz(u) \,\mid\, \exists\, o \text{ s.t. } \pi_o(i) = k\}} u_i. \qquad (4)$$
The lower-bound sketch, $\underline{u}$, is filled in a symmetric manner, in the sense that the algorithmic procedure is the same but the operator changes from $\max(\cdot)$ to $\min(\cdot)$.
Computing the inner product between a query vector $q \in \mathbb{R}^N$ and a vector $u$ given its sketch ($\phi(u) = \overline{u} \oplus \underline{u}$) uses the following procedure: positive query values are multiplied by the least upper-bound from $\overline{u}$, and negative query values by the greatest lower-bound from $\underline{u}$:
$$\sum_{i \in nz(q)} q_i\, \mathbb{1}_{i \in nz(u)} \Big( \mathbb{1}_{q_i > 0} \min_{k \in \{\pi_o(i) \,\mid\, 1 \le o \le h\}} \overline{u}_k \;+\; \mathbb{1}_{q_i < 0} \max_{k \in \{\pi_o(i) \,\mid\, 1 \le o \le h\}} \underline{u}_k \Big). \qquad (5)$$
The indicator $\mathbb{1}_{i \in nz(u)}$, which is kept in conjunction with the sketch, guarantees that the partial inner product between a query coordinate $q_i$ and the sketch of a document vector (i.e., an individual summand in Equation (5)) is 0 if $i \notin nz(u)$.
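To make the review concrete, here is a small sketch, written by us under the definitions above, of Sinnamon's upper- and lower-bound sketches with $h$ random mappings and of the query-time estimate of Equation (5), including the non-zero indicator.

```python
import numpy as np

def sinnamon_sketch(u: dict[int, float], m: int, maps) -> tuple[np.ndarray, np.ndarray]:
    """maps is a list of h callables pi_o: [N] -> [m].
    Returns (upper, lower) sketches of size m each (sketch size n = 2m).
    Example maps: [lambda i, s=s: hash((s, i)) % m for s in range(h)]"""
    upper = np.full(m, -np.inf)
    lower = np.full(m, np.inf)
    for i, v in u.items():
        for pi in maps:
            k = pi(i)
            upper[k] = max(upper[k], v)
            lower[k] = min(lower[k], v)
    upper[np.isinf(upper)] = 0.0  # untouched buckets carry no information
    lower[np.isinf(lower)] = 0.0
    return upper, lower

def sinnamon_score(q: dict[int, float], upper, lower, maps, nz_u: set[int]) -> float:
    """Equation (5): an upper-bound estimate of <q, u> from the sketch of u."""
    score = 0.0
    for i, qi in q.items():
        if i not in nz_u:            # indicator 1_{i in nz(u)}
            continue
        buckets = [pi(i) for pi in maps]
        if qi > 0:
            score += qi * min(upper[k] for k in buckets)
        else:
            score += qi * max(lower[k] for k in buckets)
    return score
```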
That pairing of the sketch with the indicator function improves the bound on error dramatically while maintaining a large compression rate. For formal results on the probability of the inner product error, we refer the reader to the original work [16].

4.2.2 Theoretical Analysis. In this work, we use a simplified instance of Sinnamon, which we call Weak Sinnamon, obtained by (a) setting the number of random mappings to 1, which we denote by $\pi$; and (b) removing $\mathbb{1}_{i \in nz(u)}$ from the inner product computation.
These two reductions have important side effects that ultimately enable us to apply existing clustering algorithms and compute inner product between vectors. Let us focus on the upper-bound sketch to illustrate these differences; similar arguments can be made for the lower-bound sketch. First, notice that the upper-bound sketch of a document vector simplifies to $\overline{u}$ where:
$$\overline{u}_k \leftarrow \max_{\{i \in nz(u) \,\mid\, \pi(i) = k\}} u_i, \qquad (6)$$
and that the upper-bound sketch of a query vector, $q$, becomes:
$$\overline{q}_k \leftarrow \sum_{\{i \in nz(q) \,\mid\, \pi(i) = k \,\wedge\, q_i > 0\}} q_i. \qquad (7)$$
We denote the former by $\phi_d(\cdot)$ (for document) and the latter by $\phi_q(\cdot)$ (for query). Second, the inner product computation between the sketches of query and document vectors reduces to:
$$\langle \phi_q(q), \phi_d(u) \rangle = \langle \overline{q}, \overline{u} \rangle + \langle \underline{q}, \underline{u} \rangle = \sum_{i:\, q_i > 0} q_i\, \overline{u}_{\pi(i)} + \sum_{i:\, q_i < 0} q_i\, \underline{u}_{\pi(i)}. \qquad (8)$$
We now extend the analysis in [16] to the setup above. We begin by stating the following claim, which is trivially true:

Theorem 4.4. For a query vector $q$ and document vector $u$, $\langle q, u \rangle \le \langle \phi_q(q), \phi_d(u) \rangle$.
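The simplified construction lends itself to a very short implementation. The following is our illustrative sketch of Equations (6) through (8); it keeps both halves of the sketch for generality even though Splade vectors are non-negative.

```python
import numpy as np

def weak_sinnamon_doc_sketch(u: dict[int, float], m: int, pi) -> np.ndarray:
    """Equation (6): upper half holds per-bucket maxima, lower half per-bucket minima."""
    upper = np.zeros(m)
    lower = np.zeros(m)
    for i, v in u.items():
        k = pi(i)
        upper[k] = max(upper[k], v)
        lower[k] = min(lower[k], v)
    return np.concatenate([upper, lower])

def weak_sinnamon_query_sketch(q: dict[int, float], m: int, pi) -> np.ndarray:
    """Equation (7): positive query mass goes to the upper half, negative to the lower half."""
    upper = np.zeros(m)
    lower = np.zeros(m)
    for i, v in q.items():
        k = pi(i)
        if v > 0:
            upper[k] += v
        else:
            lower[k] += v
    return np.concatenate([upper, lower])

def weak_sinnamon_score(q_sketch: np.ndarray, d_sketch: np.ndarray) -> float:
    # Equation (8): a plain inner product of two fixed-width sketches, which
    # upper-bounds <q, u> (Theorem 4.4) and lets us reuse standard dense tooling.
    return float(np.dot(q_sketch, d_sketch))
```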
Importantly, the inner product between query and document sketches is not an unbiased estimator of the inner product between the original vectors. Let us now model the probability of the approximation error. Consider the upper-bound sketch first. Using a similar argument to Theorem 5.4 of [16], we state the following result and provide a proof in Appendix C:

Theorem 4.5. Let $X$ be a random vector drawn according to the following probabilistic model. Coordinate $i$, $X_i$, is non-zero with probability $p_i > 0$ and, if it is non-zero, draws its value from a distribution with PDF $\phi$ and CDF $\Phi$. Then:
$$P\big[\overline{X}_{\pi(i)} - X_i \le \delta\big] \approx (1 - p_i)\, e^{-\frac{(1 - \Phi(\delta))\sum_{j \neq i} p_j}{m}} + p_i \int e^{-\frac{(1 - \Phi(\alpha + \delta))\sum_{j \neq i} p_j}{m}}\, \phi(\alpha)\, d\alpha. \qquad (9)$$

A symmetric argument can be made for the error of the lower-bound sketch. Crucially, given the result above, which formalizes the CDF of the sketching approximation error, we can obtain the expected value and variance of the random variables $\overline{X}_{\pi(i)} - X_i$ and $\underline{X}_{\pi(i)} - X_i$ for all dimensions $i$.
Then: PIX qq) â X) $5] © (1â pi) (erm EPO) Zar) +p [ eo HUM) Dye Pig (aida (9) â « PIX qq) â X) $5] © (1â pi) (erm EPO) Zar) +p [ eo HUM) Dye Pig (aida (9) A symmetric argument can be made for the error of the lower-bound sketch. Crucially, given the result above, which formalizes the CDF of the sketching approximation error, we can obtain the expected value and variance of the random variables ð
ð (ð ) â ð ð and ð ð (ð ) â ð ð for all dimensions ð . # Bridging Dense and Sparse Maximum Inner Product Search From there, and following similar arguments as the proof of Theorem 5.8 of [16], it is easy to show that the approximation error takes on a Gaussian distribution with mean: â ï¸ â ï¸ ð ð E[ð ð (ð ) â ð ð ] + ð ð E[ð ð (ð ) â ð ð ] ð : ð ð >0 ð : ð ð <0 and variance that is: â ï¸ â ï¸ ð 2 ð Var [ð ð (ð ) â ð ð ] + ð 2 ð Var [ð ð (ð ) â ð ð ]. ð : ð ð >0 ð : ð ð <0 Let us illustrate the implications of Theorem 4.5 by considering the special case where p; = y//N for all dimensions i. As the sparsity rate increases and N grows, the second term in Equation (9) tends to 0 at a rate proportional to //N, while the first term dominates, tending approximately to exp ( â (1 â ©(5))/m). By making y//m smaller, we can control the approximation error and have it concentrate on smaller magnitudes. That subsequently translates to a more accurate inner product between a fixed query and a randomly drawn document vector. As a final remark on Weak Sinnamon, we note that when ð is larger than the number of non- zero coordinates in a document vector, the resulting sketch itself is sparse. Furthermore, sketching using Weak Sinnamon only requires O (ð ) operations, with ð denoting the number of non-zero coordinates, while the JL transform has a sketching complexity of O (ð ð ).
As we explain later, these properties will play a key role in the efficiency of sparse MIPS. 4.3 Empirical Comparison Our results from the preceding sections shed light on how JL and Weak Sinnamon transformations are expected to behave when applied to sparse vectors. Our main conclusion is that the sparsity rate heavily affects the approximation error. In this section, we design experiments that help us observe the expected behavior in practice and compare the two dimensionality reduction algorithms on real data. Given a sparse dataset and a set of queries, we first obtain the exact top-1 document for each query by performing an exhaustive search over the entire collection. We then create a second dataset wherein each vector is a sketch of a vector in the original dataset. We now perform exact search over the sketch dataset to obtain top-ð
â ² (ð â ² â ¥ 1) documents, and report the accuracy of the approximate retrieval. There are two parameters in the setup above that are of interest to us. First is the sketch size, ð . By fixing the dataset (thus its sparsity rate) but increasing the sketch size, we wish to empirically quantify the effect of using larger sketches on the ability of each algorithm to preserve inner product. Note that, because the vectors are non-negative, Weak Sinnamon only uses half the sketch capacity to form the upper-bound sketchâ reducing its effective sketch size to ð /2. The second factor is ð â ² which controls how â hardâ
a retrieval algorithm must work to compensate for the approximation error. Changing ð â ² helps us understand if the error introduced by a particular sketch size can be attenuated by simply retrieving more candidates and later re-ranking them according to their exact score. The results of our experiments are presented in Figure 1 for select datasets embedded with the Splade model. We chose these datasets because they have very different sizes and sparsity rates, as shown in Table 1, with Quora having the largest sparsity rate and fewest documents, and NQ the smallest sparsity rate and a medium collection size. Naturally, our observations are consistent with what the theoretical results predict. The sketch quality improves as its size increases. That shows the effect of the parameter ð on the approximation variance of the JL transform and the concentration of error in Weak Sinnamon sketches.
111:13 111:14 111:14 # Sebastian Bruch, Franco Maria Nardini, Amir Ingber, and Edo Liberty (a) Quora (b) NQ Fig. 1. Top-1 accuracy of retrieval for test queries over sketches produced by JL transform (left column), Weak Sinnamon (middle column), and, as a point of reference, the original Sinnamon algorithm (right column). We retrieve the top-ð â ² documents by performing an exhaustive search over the sketch collection and re-ranking the candidates by exact inner product to obtain the top-1 document and compute accuracy. Each line in the figures represents a different sketch size ð
. We note that Weak Sinnamon and Sinnamon only use half the sketch to record upper-bounds but leave the lower-bound sketch unused because Splade vectors are non-negative. That implies that their effective sketch size is half that of the JL transformâ s. Another unsurprising finding is that Weak Sinnamonâ s sensitivity to the ð /ð factor becomes evident in NQ: When the ratio between the number of non-zero coordinates and the sketch size (ð /ð ) is large, the variance of the approximation error becomes larger. The reason is twofold: more non-zero coordinates are likely to collide as vectors become more dense; and, additionally, sketches themselves become more dense, thereby increasing the likelihood of error for inactive coordinates. To contextualize Weak Sinnamon and the effects of our modifications to the original algorithm on the approximation error, we also plot in Figure 1 the performance of Sinnamon. While increasing the sketch size is one way to lower the probability of error, casting a wider net (i.e., ð
â ² > ð ) followed by re-ranking appears to also improve retrieval quality. Now that we have a better understanding of the effect of the parameters on the quality of the sketching algorithms, let us choose one configuration and repeat the experiments above on all our datasets. One noteworthy adjustment is that we set Weak Sinnamonâ s effective sketch size to match that of the JL transformâ s: As we noted, because Weak Sinnamon leaves the lower-bound sketch unused for non-negative vectors, we re-allocate it for the upper-bound sketch, in effect giving Weak Sinnamonâ s upper-bound sketch ð dimensions to work with.
Another change is that we use a more challenging configuration and perform top-10 retrieval. Finally, we also include Efficient Splade for completeness. # Bridging Dense and Sparse Maximum Inner Product Search (a) Splade (b) Efficient Splade Fig. 2. Top-10 accuracy of retrieval for test queries over sketches of size ð = 1024 produced by JL transform (left column), Weak Sinnamon (middle column), and, for reference, the original Sinnamon algorithm (right column). As in Figure 1, we retrieve the top-ð â ² documents by performing an exhaustive search over the sketch collection and re-ranking the candidates by exact inner product to obtain the top-10 documents and compute accuracy. Similarly, each line in the figures represents a different sketch size ð
. In these experiments, however, we adjust the effective sketch size of Weak Sinnamon and Sinnamon to match that of the JL transformâ s. Figure 2 shows the results of these experiments. The general trends observed in these figures are consistent with the findings of Figure 1: Obtaining a larger pool of candidates from sketches and re-ranking them according to their exact inner product is a reliable way of countering the approximation error; and, Weak Sinnamon generally underperforms the JL transform in preserving inner product between vectors. Additionally, as vectors become more dense, the sketching quality degrades, leading to a higher approximation error. Another interesting but expected phenomenon is that sketching performs comparatively poorly on Efficient Splade. That is because, query vectors generated by the Efficient Splade model are more sparse than those made by Splade. When a query has few non-zero coordinates, the expected inner product becomes small while the variance of JL transform sketches concentrates around a constant, as predicted by Theorem 4.3. As for Weak Sinnamon, when queries have a large number of non-zero coordinates, the shape of the distribution of error becomes less sensitive to the approximation error of individual coordinates; with fewer non-zero coordinates in the query vector, the opposite happens. As a final observation, we notice that retrieval accuracy is generally higher for Quora, MS Marco, and NQ datasets. That is easy to explain for Quora as it is a more sparse dataset with a much smaller ð /ð .
On the other hand, the observed trend is rather intriguing for a larger and more dense dataset such as MS Marco. On closer inspection, however, it appears that the stronger performance can be attributed to the probabilities of coordinates being non-zero (i.e., ð ð â s). In 111:15 111:15 111:16 111:16 # Sebastian Bruch, Franco Maria Nardini, Amir Ingber, and Edo Liberty (a) Splade (b) Efficient Splade Fig. 3. Probability of each coordinate being non-zero (ð ð for coordinate ð
) for Splade and Efficient Splade vectors of several datasets. To aid visualization, we sort the coordinates by ð ð â s in descending order. A Zipfian distribution would manifest as a line in the log-log plot. Notice that, this distribution is closer to uniform for MS Marco than others. Figure 3, we plot the distribution of ð ð â s but, to make the illustration cleaner, sort the coordinates by their ð ð in descending order. Interestingly, the distribution of ð ð â
s is closer to uniform for MS Marco and NQ, while it is more heavily skewed for Fever, DBPedia, and HotpotQA. 5 EVALUATION OF CLUSTERING OVER SKETCHES OF SPARSE VECTORS In the preceding section, we were squarely concerned with the ability of the two sketching al- gorithms in approximately preserving inner product between a query vector and an arbitrary document vector. That analysis is relevant if one were to directly operate on sketches as opposed to the original vectors when, say, building a graph-based nearest neighbor search index such as HNSW [50] or IP-NSW [55]. In this work, our primary use for sketches is to form partitions in the context of Algorithms 1 and 2: Whether R searches over sketches or the original vectors is left as a choice. In that framework, Section 4 has already studied the first line of the two algorithms: sketching the sparse vectors. In this section, we turn to the clustering procedure and empirically evaluate two alternatives: Standard and spherical KMeans. Note that, the clustering choice is the last piece required to complete the two algorithms and apply IVF-style search to sparse vectors. Standard KMeans is an iterative protocol that partitions the input data into a predefined number of clusters, ð
¾. It first samples ð ¾ arbitrary points, called â centroids,â from the data distribution at randomâ though there are other initialization protocols available, such as KMeans++ [5]. It then repeats until convergence two steps: It assigns each data point to the nearest centroid by their Euclidean distance to form partitions in the first step; and, in the second step, recomputes the centroids to be the mean of the mass of all data points assigned to each partition. While this Expectation-Maximization procedure may fall into local optima, it generally produces partitions that approximate Voronoi regions in a dataset. # Bridging Dense and Sparse Maximum Inner Product Search Spherical KMeans works similarly, with the notable exception that at the end of each iteration, it normalizes the centroids so that they are projected onto the unit sphere. This form of clustering has been used in the past for a topical analysis of text documents [21] among other applications. Both of these clustering algorithms are popular choices in the IVF-based approximate nearest neighbor search as evidenced by their integration into commonly used software packages such as FAISS [32]. As such, we plug the two methods into Algorithms 1 and 2 and apply them to our datasets. Our objective is to understand the differences between the two clustering choices in terms of their role in the overall retrieval quality as well as their sensitivity to the choice of sketching algorithm. 5.1 Empirical Comparison We begin by emphasizing that, in this particular section, we do not pay attention to speed and only report accuracy as a function of the total number of documents examined, â , in Algorithm 2. Additionally, we use an exact, exhaustive search algorithm as R over the original vectors to find the final top-ð candidates once the â -subset of a dataset has been identified. Before we state our findings, a note on our choice of â the number of documents examinedâ (â ) versus the more familiar notion of â the number of clusters searchedâ (known commonly as nProbe): The standard KMeans algorithm is highly sensitive to vector norms. That is natural as the algorithm cares solely about the Euclidean distance between points within a partition. When it operates on a collection of vectors with varying norms, then, it is intuitive that it tends to isolate high-normed points in their own, small partitions, while lumping together the low-normed vectors into massive clusters.
As a result of this phenomenon, partitions produced by standard KMeans are often imbalanced. Probing a fixed number of partitions at search time would therefore put standard KMeans at an unfair disadvantage compared to its spherical variant. By choosing to work with $\ell$, rather than fixating on the number of top clusters, we remove that variable from the equation.

Figure 4 summarizes our results for the Splade-generated vectors. We plot one figure per dataset, where each figure depicts the relationship between top-10 accuracy and $\ell$ (expressed as a percentage of the total number of documents). When applying Algorithm 1 to the datasets, we set the sketch size to 1024 per the findings of Section 4. Additionally, we fix the number of partitions $K$ to $4\sqrt{|\mathcal{X}|}$, where $|\mathcal{X}|$ is the number of documents in a dataset $\mathcal{X}$.
Plots for Efficient Splade are shown separately in Figure 5.

One of the most striking observations is that spherical KMeans appears to be universally a better choice on the vector datasets we examine in this work. By partitioning the data with spherical KMeans in Algorithm 1 and examining at most 10% of the collection, we often reach a top-10 accuracy well above 0.8, and often 0.9. This is in contrast to the performance of standard KMeans, which often lags behind.

We are also surprised by how little the choice of the JL transform versus Weak Sinnamon appears to matter, in the high-accuracy regime, for the purposes of partitioning with spherical KMeans and retrieval over the resulting partitions. When the clustering method is standard KMeans, on the other hand, the difference between the two sketching algorithms is sometimes more noticeable. Additionally, and perhaps unsurprisingly, the difference between the two sketching methods is more pronounced in experiments on the Efficient Splade vector datasets.
[Figure 4: panels (a) MS Marco, (b) NQ, (c) Quora, (d) HotpotQA, (e) Fever, (f) DBPedia; x-axis: % Docs Probed; series: Spherical - JL, Spherical - Weak Sinnamon, Standard - JL, Standard - Weak Sinnamon]
Fig. 4. Top-10 accuracy of Algorithm 2 for Splade vectors versus the number of documents examined ($\ell$), expressed as a percentage of the size of the collection, for different clustering algorithms (standard and spherical KMeans) and different sketching mechanisms (JL transform and Weak Sinnamon, with sketch size 1024). Note that the vertical axis is not consistent across figures.
[Figure 5: panels (a) MS Marco, (b) NQ, (c) Quora, (d) HotpotQA, (e) Fever, (f) DBPedia; x-axis: % Docs Probed; series: Spherical - JL, Spherical - Weak Sinnamon, Standard - JL, Standard - Weak Sinnamon]
Fig. 5. Top-10 accuracy of Algorithm 2 for Efficient Splade vs. the number of documents examined ($\ell$).

6 CLUSTERING AS DYNAMIC PRUNING FOR THE INVERTED INDEX

Throughout the previous sections, we simply assumed that once Algorithm 2 has identified the top partitions and accumulated the $\ell$-subset of documents to examine, the task of actually finding the top-$k$ vectors from that restricted subset would be delegated to a secondary MIPS algorithm, R, which we have thus far ignored. We now wish to revisit R.

There are many ways one could design and implement R and apply it to the set of partitions $\mathcal{P}_I$ on Line 10 of Algorithm 2. For example, R may be an exhaustive search, an option we used previously because we argued we were assessing retrieval quality alone and did not concern ourselves with efficiency. As another example, if partitions are stored on separate physical (or logical) retrieval nodes in a distributed system, each node could use an inverted index-based algorithm to find the
Algorithm 3: Constructing a partitioned inverted index
Input: Collection of sparse vectors, $\mathcal{X} \subset \mathbb{R}^N$; clusters $\mathcal{P}$ obtained from Algorithm 1.
Result: Inverted index, $\mathcal{I}$; skip list, $\mathcal{S}$.
1: $\mathcal{I} \leftarrow \emptyset$ ; ▷ Initialize the inverted index
2: $\mathcal{S} \leftarrow \emptyset$ ; ▷ Initialize the skip list
3: for $\mathcal{P}_i \in \mathcal{P}$ do
4:   SortAscending($\mathcal{P}_i$) ; ▷ Sort partition by document identifier
5:   for $j \in \mathcal{P}_i$ do
6:     for $t \in nz(x^{(j)})$ do
7:       $\mathcal{S}[t]$.Append($(i, |\mathcal{I}[t]|)$) if it is the first time a document from $\mathcal{P}_i$ is recorded in $\mathcal{I}[t]$
8:       $\mathcal{I}[t]$.Append($(j, x^{(j)}_t)$) ; ▷ Append document identifier and value to list
9:     end for
10:   end for
11: end for
12: return $\mathcal{I}$, $\mathcal{S}$

top-$k$