Large Language Models for Information Retrieval: A Survey
Based on the "bag-of-words" assumption, the vector space model [26] represents documents and queries as vectors in a term-based space. Relevance estimation is then performed by assessing the lexical similarity between the query and document vectors. The efficiency of this model is further improved through the effective organization of text content in an inverted index. Moving toward more sophisticated approaches, statistical language models were introduced to estimate the likelihood of term occurrences and incorporate context information, leading to more accurate and context-aware retrieval [27, 54]. In recent years, the neural IR paradigm [30, 55, 56] has gained considerable attention in the research community. By harnessing the powerful representation capabilities of neural networks, this paradigm can capture semantic relationships between queries and documents, thereby significantly enhancing retrieval performance. Researchers have also identified several challenges with implications for the performance and effectiveness of IR systems, such as query ambiguity and retrieval efficiency.
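To ground these classical notions, the following is a minimal sketch (not from the survey) of term-based scoring over an inverted index; the toy corpus, whitespace tokenizer, and the particular TF-IDF weighting are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

# Toy corpus; each document is represented in term space ("bag of words").
docs = {
    "d1": "large language models for information retrieval",
    "d2": "neural networks capture semantic relationships between queries and documents",
    "d3": "the inverted index organizes text content for efficient retrieval",
}
tokenized = {doc_id: text.split() for doc_id, text in docs.items()}

# Inverted index: term -> {doc_id: term frequency}. At query time, only
# documents containing at least one query term are ever touched.
index = defaultdict(dict)
for doc_id, terms in tokenized.items():
    for term, tf in Counter(terms).items():
        index[term][doc_id] = tf

def idf(term):
    df = len(index.get(term, {}))
    return math.log((len(docs) + 1) / (df + 1)) + 1  # smoothed IDF

def score(query):
    # Dot product of TF-IDF vectors (cosine normalization omitted for brevity).
    scores = Counter()
    for term, q_tf in Counter(query.split()).items():
        for doc_id, d_tf in index.get(term, {}).items():
            scores[doc_id] += (q_tf * idf(term)) * (d_tf * idf(term))
    return scores.most_common()

print(score("efficient information retrieval"))
```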
In light of these challenges, researchers have directed their attention toward crucial modules within the retrieval process, aiming to address specific issues and make corresponding enhancements. These modules play a pivotal role in improving the IR pipeline and elevating system performance. In this survey, we focus on the following four modules, which have been greatly enhanced by LLMs.

6. https://github.com/RUC-NLPIR/LLM4IR-Survey

Query Rewriter is an essential IR module that seeks to improve the precision and expressiveness of user queries. Positioned at the early stage of the IR pipeline, this module refines or modifies the initial query to align more accurately with the user's information requirements. As an integral part of query rewriting, query expansion techniques, with pseudo relevance feedback being a prominent example, represent the mainstream approach to refining query expressions. Beyond improving search effectiveness in general scenarios, the query rewriter is also applied in diverse specialized retrieval contexts, such as personalized search and conversational search, further demonstrating its significance.

Retriever, as discussed here, is typically employed in the early stages of IR for document recall. The evolution of retrieval technologies reflects a constant pursuit of more effective and efficient methods for the challenges posed by ever-growing text collections. In numerous experiments on IR systems over the years, the classical "bag-of-words" model BM25 [29] has demonstrated robust performance and high efficiency. In the wake of the neural IR paradigm's ascendancy, prevalent approaches have primarily revolved around projecting queries and documents into high-dimensional vector spaces and computing their relevance scores through inner products. This paradigmatic shift enables more effective modeling of query-document relationships, leveraging the power of vector representations to capture semantic similarities.

Reranker, as another crucial module in the retrieval pipeline, primarily focuses on fine-grained reordering of documents within the retrieved document set. Different from the retriever, which emphasizes the balance of efficiency and effectiveness, the reranker places a greater emphasis on the quality of document ranking. In pursuit of higher search-result quality, researchers study matching methods that are more complex than the traditional vector inner product, thereby furnishing richer matching signals to the reranker. Moreover, the reranker facilitates specialized ranking strategies tailored to distinct user requirements, such as personalized and diversified search results. By integrating domain-specific objectives, the reranker module can deliver tailored and purposeful search results, enhancing the overall user experience.

Reader has evolved into a crucial module with the rapid development of LLM technologies. Its ability to comprehend real-time user intent and generate dynamic responses based on the retrieved text has revolutionized the presentation of IR results.
In comparison to presenting a list of candidate documents, the reader module organizes answer texts more intuitively, simulating the natural way humans access information. To enhance the credibility of generated responses, integrating references into the generated text has proven an effective technique for the reader module.

Furthermore, researchers have explored unifying the above modules into a novel LLM-driven search model known as the Search Agent. The search agent is distinguished by its simulation of an automated search and result-understanding process, which furnishes users with accurate and readily comprehensible answers. WebGPT [24] serves as a pioneering work in this category: it models the search process as a sequence of actions of an LLM-based agent within a search engine environment, autonomously accomplishing the whole search pipeline. By integrating with the existing search stack, search agents have the potential to become a new paradigm in future IR.

# 2.2 Large Language Models

Language models (LMs) are designed to calculate the generative likelihood of word sequences by taking into account the contextual information from preceding words, thereby predicting the probability of subsequent words. Consequently, by employing certain word selection strategies (such as greedy decoding or random sampling), LMs can proficiently generate natural language texts. Although the primary objective of LMs lies in text generation, recent studies [57] have revealed that a wide array of natural language processing problems can be effectively reformulated into a text-to-text format, rendering them amenable to resolution through text generation. This has led to LMs becoming the de facto solution for the majority of text-related problems.
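As a concrete illustration of these definitions, the following sketch factorizes a sequence's generative likelihood into next-word probabilities and applies greedy decoding; `next_token_probs` is a hypothetical stand-in for a trained model.

```python
import math

def next_token_probs(prefix):
    # Stand-in for a trained LM: returns P(next word | preceding words).
    # A real model would compute this distribution with a neural network.
    table = {
        (): {"the": 0.6, "a": 0.4},
        ("the",): {"query": 0.7, "document": 0.3},
        ("the", "query"): {"<eos>": 1.0},
    }
    return table.get(tuple(prefix), {"<eos>": 1.0})

def sequence_log_prob(tokens):
    # Generative likelihood: log P(w1..wn) = sum_i log P(wi | w1..wi-1).
    total = 0.0
    for i, tok in enumerate(tokens):
        total += math.log(next_token_probs(tokens[:i])[tok])
    return total

def greedy_decode(max_len=10):
    # Word selection strategy: always pick the most probable next word.
    out = []
    for _ in range(max_len):
        probs = next_token_probs(out)
        tok = max(probs, key=probs.get)
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(greedy_decode(), sequence_log_prob(["the", "query"]))
```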
The evolution of LMs can be categorized into four primary stages, as discussed in prior literature [52]. Initially, LMs were rooted in statistical learning techniques and were termed statistical language models. These models tackled the problem of word prediction by employing the Markov assumption to predict the subsequent word based on preceding words. Thereafter, neural networks, particularly recurrent neural networks (RNNs), were introduced to calculate the likelihood of text sequences, establishing neural language models. These advancements made it feasible to utilize LMs for representation learning beyond mere word sequence modeling. ELMo [58] first proposed learning contextualized word representations by pre-training a bidirectional LSTM (biLSTM) network on large-scale corpora, followed by fine-tuning on specific downstream tasks. Similarly, BERT [59] proposed pre-training a Transformer [60] encoder with a specially designed Masked Language Modeling (MLM) task and a Next Sentence Prediction (NSP) task on large corpora. These studies initiated a new era of pre-trained language models (PLMs), with the "pre-training then fine-tuning" paradigm emerging as the prevailing learning approach. Along this line, numerous generative PLMs (e.g., GPT-2 [33], BART [61], and T5 [57]) have been developed for text generation problems, including summarization, machine translation, and dialogue generation. Recently, researchers have observed that increasing the scale of PLMs (e.g., model size or data amount) can consistently improve their performance on downstream tasks (a phenomenon commonly referred to as the scaling law [62, 63]). Moreover, large-sized PLMs exhibit promising abilities (termed emergent abilities [42]) in addressing complex tasks, which are not evident in their smaller counterparts. Therefore, the research community refers to these large-sized PLMs as large language models (LLMs).

Fig. 2. The evolution of LLMs (encoder-decoder and decoder-only structures).

As shown in Figure 2, existing LLMs can be categorized into two groups based on their architectures: encoder-decoder [57, 61, 64-69] and decoder-only [33-35, 70-80] models.
The encoder-decoder models incorporate an encoder component to transform the input text into vectors, which are then employed for producing output texts. For example, T5 [57] is an encoder-decoder model that converts each natural language processing problem into a text-to-text form and resolves it as a text generation problem. In contrast, decoder-only models, typified by GPT, rely on the Transformer decoder architecture, using a self-attention mechanism with a diagonal attention mask to generate a sequence of words from left to right. Building upon the success of GPT-3 [34], the first model to encompass over 100B parameters, several noteworthy models have followed, including GPT-J, BLOOM [78], OPT [75], Chinchilla [81], and LLaMA [35]. These models follow a Transformer decoder structure similar to GPT-3's and are trained on various combinations of datasets.
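The "diagonal attention mask" mentioned above can be made concrete in a few lines of NumPy; this is a generic illustration rather than any particular model's implementation.

```python
import numpy as np

def causal_attention_weights(seq_len, scores):
    # Decoder-only models mask out future positions, so token i can only
    # attend to tokens 0..i (left-to-right generation).
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    # Softmax over the unmasked positions gives the attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
w = causal_attention_weights(4, rng.normal(size=(4, 4)))
print(np.round(w, 2))  # upper triangle is zero: no attention to the future
```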
Fig. 3. An example of LLM-based query rewriting for ad-hoc search. The example is cited from the Query2Doc paper [86]. LLMs are used to generate a passage to supplement the original query; N = 0 and N > 0 correspond to zero-shot and few-shot scenarios.

Owing to their vast number of parameters, fine-tuning LLMs for specific tasks such as IR is often deemed impractical. Consequently, two prevailing methods for applying LLMs have been established: in-context learning (ICL) and parameter-efficient fine-tuning. ICL is one of the emergent abilities of LLMs [34], empowering them to comprehend and furnish answers based on the provided input context rather than relying merely on their pre-training knowledge. This method requires only the formulation of the task description and demonstrations in natural language, which are then fed as input to the LLM; notably, no parameter tuning is required for ICL. Additionally, the efficacy of ICL can be further augmented through chain-of-thought prompting, which provides demonstrations that describe the chain of thought behind the examples to guide the model's reasoning process. ICL is the most commonly used method for applying LLMs to IR.
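A minimal sketch of assembling such an ICL prompt, in the spirit of the Query2Doc example in Figure 3; `llm` is a placeholder for any text-completion endpoint, and the demonstration pair is taken from the figure.

```python
def build_icl_prompt(instruction, demonstrations, test_input):
    # ICL: task description + demonstrations + new input, all expressed in
    # natural language; none of the LLM's parameters are updated.
    parts = [instruction]
    for query, passage in demonstrations:
        parts.append(f"Query: {query}\nPassage: {passage}")
    parts.append(f"Query: {test_input}\nPassage:")
    return "\n\n".join(parts)

demos = [
    ("what state is this zip code 85282",
     "Welcome to TEMPE, AZ 85282. 85282 is a zip code in Tempe, Arizona."),
]
prompt = build_icl_prompt(
    "Write a passage to answer the given query:",
    demos,                      # N > 0: few-shot; pass [] for zero-shot
    "when was pokemon green released?",
)
# passage = llm(prompt)  # `llm` is a hypothetical completion endpoint
print(prompt)
```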
Parameter-efficient fine-tuning [82-84] aims to reduce the number of trainable parameters while maintaining satisfactory performance. LoRA [82], for example, has been widely applied to open-source LLMs (e.g., LLaMA and BLOOM) for this purpose. Recently, QLoRA [85] has been proposed to further reduce memory usage by leveraging a frozen 4-bit quantized LLM for gradient computation. Although parameter-efficient fine-tuning has been explored for various NLP tasks, its application to IR tasks remains relatively limited, representing a potential avenue for future research.
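As an illustration, here is a hedged sketch of LoRA-style parameter-efficient fine-tuning with the Hugging Face peft library; the checkpoint name and hyperparameters are illustrative assumptions, not values prescribed by the survey.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative checkpoint; any open-source causal LM works the same way.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA freezes the base weights and learns low-rank update matrices,
# so only a small fraction of parameters is trainable.
config = LoraConfig(
    r=8,                                  # rank of the low-rank decomposition
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the model

# From here, `model` is fine-tuned with a standard training loop or the
# transformers Trainer, exactly like a full model but far more cheaply.
```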
# 3 QUERY REWRITER

Query rewriting in modern IR systems is essential for improving search query effectiveness and accuracy. It reformulates users' original queries to better match search results, alleviating issues such as vague queries or vocabulary mismatches between the query and target documents. This task goes beyond mere synonym replacement, requiring an understanding of user intent and query context, particularly in complex searches such as conversational queries. Effective query rewriting enhances search engine performance.

Traditional methods for query rewriting improve retrieval performance by expanding the initial query with information from highly ranked relevant documents. Commonly used methods include relevance feedback [87-92] and word-embedding-based methods [93, 94]. However, their limited semantic understanding and comprehension of user search intent restrict their ability to capture the full scope of user intent.

Recent advancements in LLMs present promising opportunities to boost query rewriting capabilities.
Fig. 4. An example of LLM-based query rewriting for conversational search. The example is cited from LLMCS [95]. The LLM generates a query based on the demonstrations and the previous search context; additional responses are generated to improve query understanding. N = 0 and N > 0 correspond to zero-shot and few-shot scenarios.

On one hand, given the context and subtleties of a query, LLMs can provide more accurate and contextually relevant rewrites. On the other hand, LLMs can leverage their extensive knowledge to generate synonyms and related concepts, enhancing queries to cover a broader range of relevant documents and thereby effectively addressing the vocabulary mismatch problem. In the following sections, we introduce recent works that employ LLMs in query rewriting.

# 3.1 Rewriting Scenario

Query rewriting typically serves two scenarios: ad-hoc retrieval, which mainly addresses vocabulary mismatches between queries and candidate documents, and conversational search, which refines queries based on evolving conversations. The following sections examine the role of query rewriting in these two domains and explore how LLMs enhance this process.

# 3.1.1 Ad-hoc Retrieval

In ad-hoc retrieval, queries are often short and ambiguous. In such scenarios, the main objectives of query rewriting include adding synonyms or related terms to address vocabulary mismatches and clarifying ambiguous queries to align more accurately with user intent. From this perspective, LLMs have inherent advantages in query rewriting.
Primarily, LLMs have a deep understanding of language semantics, allowing them to capture the meaning of queries more effectively. Besides, LLMs can leverage their extensive training on diverse datasets to generate contextually relevant synonyms and expand queries, ensuring broader and more precise coverage of search results. Additionally, studies have shown that LLMs' integration of external factual corpora [96-99] and thoughtful model design [100] further enhance their accuracy in generating effective query rewrites, especially for specific tasks.

Currently, many studies leverage LLMs to rewrite queries in ad-hoc retrieval. We introduce the typical method Query2Doc [86] as an example. As shown in Figure 3, Query2Doc prompts the LLM to generate a relevant passage according to the original query ("when was pokemon green released?"). Subsequently, the original query is expanded by incorporating the generated passage, and the retriever module uses this new query to retrieve a list of relevant documents. Notably, the generated passage contains additional detailed information, such as "Pokemon Green was released in Japan on February 27th", which effectively mitigates the "vocabulary mismatch" issue to some extent.

In addition to addressing the "vocabulary mismatch" problem [96-99, 101, 102], other works utilize LLMs for different challenges in ad-hoc retrieval. For instance, PromptCase [103] leverages LLMs in legal case retrieval to simplify complex queries into more searchable forms. It uses LLMs to identify legal facts and issues, followed by a prompt-based encoding scheme for effective language model encoding.

# 3.1.2 Conversational Search

Query rewrites in conversational search play a pivotal role in enhancing the search experience. Unlike traditional queries in ad-hoc retrieval, conversational search involves a dialogue-like interaction, where the context and user intent evolve with each turn. In conversational search, query rewriting involves understanding the entire conversation's context, clarifying any ambiguities, and personalizing responses based on user history. The process includes dynamic query expansion and refinement based on dialogue information. This makes conversational query rewriting a sophisticated task that goes beyond traditional search, focusing on natural language understanding and user-centric interaction.

In the era of LLMs, leveraging LLMs for conversational search offers several advantages. First, LLMs possess strong contextual understanding capabilities, enabling them to better comprehend users' search intent within multi-turn conversations between users and the system. Second, LLMs exhibit powerful generation abilities, allowing them to simulate dialogues between users and the system and thereby facilitating more robust search intent modeling.

The LLMCS framework [95] is a pioneering approach that employs LLMs to effectively extract and understand user search intent within conversational contexts. As illustrated in their work, LLMCS uses LLMs to produce both query rewrites and extensive hypothetical system responses from various perspectives.
These outputs are combined into a comprehensive representation that effectively captures the user's full search intent. The experimental results show that including detailed hypothetical responses alongside concise query rewrites markedly improves search performance by adding more plausible search intent. Ye et al. [104] claim that human query rewrites may lack sufficient information for optimal retrieval performance, and they define four essential properties for well-formed LLM-generated query rewrites. Their results show that informative LLM-generated query rewrites can yield substantially better retrieval performance than human rewrites. Besides, LLMs can be used as a data expansion tool in conversational dense retrieval. Owing to the high cost of producing hand-written dialogues, data scarcity presents a significant challenge in conversational search. To address this problem, CONVERSER [105] employs LLMs to generate synthetic passage-dialogue pairs through few-shot demonstrations, and it efficiently trains a dense retriever using a minimal dataset of six in-domain dialogues, thus mitigating the issue of data sparsity.
# 3.2 Rewriting Knowledge

Query rewriting typically necessitates additional corpora for refining initial queries. Since LLMs store world knowledge in their parameters, they are naturally capable of rewriting queries. We refer to methods that rely exclusively on the intrinsic knowledge of LLMs as LLM-only methods. While LLMs encompass a broad spectrum of knowledge, they may be inadequate in specialized areas. Furthermore, LLMs can introduce concept drift, leading to noisy relevance signals. To address this, some methods incorporate domain-specific corpora to provide more detailed and relevant information during query rewriting; we refer to these as corpus-enhanced LLM-based methods. This section introduces both in detail.

# 3.2.1 LLM-only methods

LLMs are capable of storing knowledge within their parameters, making it natural to capitalize on this knowledge for query rewriting. As a pioneering work in LLM-based query rewriting, HyDE [101] uses an LLM to generate a hypothetical document for the given query, and then uses a dense retriever to retrieve documents from the corpus that are relevant to the generated document. Query2doc [86] generates pseudo documents by prompting LLMs with few-shot demonstrations and then expands the query with the generated pseudo document. Furthermore, the influence of different prompting methods and various model sizes on query rewriting has also been investigated [102]. To better accommodate the frozen retriever and the LLM-based reader, a small language model can be employed as the rewriter and trained with reinforcement learning, using rewards provided by the LLM-based reader [100].
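The two LLM-only recipes above can be summarized in a few lines. This is a sketch of the general idea rather than the papers' exact pipelines; `llm`, `encoder`, and `doc_index` are hypothetical components, and the query-repetition factor is an illustrative choice.

```python
def hyde_style_search(query, llm, encoder, doc_index, k=10):
    # 1) Ask the LLM for a hypothetical document answering the query.
    hypothetical = llm(f"Please write a passage to answer the question.\n"
                       f"Question: {query}\nPassage:")
    # 2) Embed the generated passage and use it (instead of the raw query)
    #    to search a dense index.
    vector = encoder(hypothetical)
    return doc_index.search(vector, k)

def query2doc_style_query(query, llm, n_copies=5):
    # Query2doc-style expansion: concatenate the original query (repeated,
    # to keep it dominant) with the generated pseudo document, e.g. for a
    # sparse retriever such as BM25.
    pseudo_doc = llm(f"Write a passage that answers the given query:\n"
                     f"Query: {query}\nPassage:")
    return " ".join([query] * n_copies + [pseudo_doc])
```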
GFF [106] presents a "Generate, Filter, and Fuse" method for query expansion. It employs an LLM to create a set of related keywords via a reasoning chain; a self-consistency filter then identifies the most important keywords, which are concatenated with the original queries for the downstream reranking task. It is worth noting that, although the designs of these methods differ, all of them rely on the world knowledge stored in LLMs without additional corpora.

# 3.2.2 Corpus-enhanced LLM-based methods

Although LLMs exhibit remarkable capabilities, their lack of domain-specific knowledge may lead to the generation of hallucinatory or irrelevant queries. To address this issue, recent studies [96-99] have proposed a hybrid approach that enhances LLM-based query rewriting with an external document corpus.

Why incorporate a document corpus? The integration of a document corpus offers several notable advantages. First, it boosts relevance by using relevant documents to refine query generation, reducing irrelevant content and improving contextually appropriate outputs. Second, enhancing LLMs with up-to-date information and specialized knowledge in specific fields enables them to effectively handle queries that are both current and domain-specific.

How to incorporate a document corpus? Thanks to the flexibility of LLMs, various paradigms have been proposed for incorporating a document corpus into LLM-based query rewriting, which can be summarized as follows.
• Late fusion of LLM-based rewriting and pseudo relevance feedback (PRF) retrieval results. Traditional PRF methods leverage relevant documents retrieved from a document corpus to rewrite queries, which restricts the query to the information contained in the target corpus. By contrast, LLM-based rewriting methods provide external context not present in the corpus, which is more diverse. Both approaches have the potential to independently enhance retrieval performance, so a straightforward strategy for combining them is a weighted fusion of the retrieval results [99] (see the sketch after this list).
• Combining retrieved relevant documents in the prompts of LLMs. In the era of LLMs, incorporating instructions within prompts is the most flexible way of achieving specific functionalities. QUILL [97] and CAR [107] illustrate how retrieval augmentation of queries can provide LLMs with context that significantly enhances query understanding. LameR [108] takes this further by using LLM expansion to improve a simple BM25 retriever, introducing a retrieve-rewrite-retrieve framework. Experimental results reveal that even basic term-based retrievers can achieve comparable performance when paired with LLM-based rewriters. Additionally, InteR [98] proposes a multi-turn interaction framework between search engines and LLMs: search engines expand queries using LLM-generated insights, while LLMs refine prompts using relevant documents sourced from the search engines.
• Enhancing the factuality of generative relevance feedback (GRF) with pseudo relevance feedback (PRF). Although generated documents are often relevant and diverse, they exhibit hallucinatory characteristics. In contrast, traditional documents are generally regarded as reliable sources of factual information. Motivated by this observation, GRM [96] proposes a technique known as relevance-aware sample estimation (RASE). RASE leverages relevant documents retrieved from the collection to assign weights to generated documents. In this way, GRM ensures that relevance feedback is not only diverse but also maintains a high degree of factuality.
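Returning to the late-fusion bullet above, the following sketch interpolates two retrieval runs after min-max normalization; the normalization scheme and interpolation weight are assumptions for illustration, as [99] may combine the runs differently.

```python
def weighted_fusion(prf_run, llm_run, alpha=0.5):
    # Late fusion: interpolate (min-max normalized) scores from a PRF-based
    # run and an LLM-rewrite-based run; either run alone may already help.
    def normalize(run):
        lo, hi = min(run.values()), max(run.values())
        return {d: (s - lo) / (hi - lo + 1e-9) for d, s in run.items()}
    prf, llm = normalize(prf_run), normalize(llm_run)
    fused = {d: alpha * prf.get(d, 0.0) + (1 - alpha) * llm.get(d, 0.0)
             for d in set(prf) | set(llm)}
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)

prf_run = {"d1": 12.3, "d2": 10.1, "d3": 7.8}   # run with PRF-expanded query
llm_run = {"d2": 0.92, "d4": 0.88, "d1": 0.55}  # run with LLM-rewritten query
print(weighted_fusion(prf_run, llm_run, alpha=0.6))
```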
TABLE 1. Partial examples of different prompting methods in query rewriting.

| Method | Prompt |
|---|---|
| Zero-shot (HyDE [101]) | Please write a passage to answer the question. Question: {#Question} Passage: |
| Zero-shot (LameR [108]) | Give a question {#Question} and its possible answering passages: A. {#Passage 1} B. {#Passage 2} C. {#Passage 3} ... Please write a correct answering passage. |
| Few-shot (Query2Doc [86]) | Write a passage that answers the given query: Query: {#Query 1} Passage: {#Passage 1} ... Query: {#Query} Passage: |
| Chain-of-thought (CoT [102]) | Answer the following query based on the context: Context: {#PRF doc 1} {#PRF doc 2} {#PRF doc 3} Query: {#Query} Give the rationale before answering |

# 3.3 Rewriting Approaches

There are three main approaches for leveraging LLMs in query rewriting: prompting, fine-tuning, and knowledge distillation. Prompting methods use specific prompts to direct the LLM's output, providing flexibility and interpretability. Fine-tuning adjusts pre-trained LLMs on specific datasets or tasks to improve domain-specific performance, mitigating the general nature of LLM world knowledge. Knowledge distillation transfers LLM knowledge to lightweight models, reducing the complexity associated with retrieval augmentation.
In the following sections, we introduce these three methods in detail.

# 3.3.1 Prompting

Prompting in LLMs refers to the technique of providing a specific instruction or context to guide the model's text generation. The prompt serves as a conditioning signal and influences the model's language generation process. Existing prompting strategies can be roughly categorized into three groups: zero-shot prompting, few-shot prompting, and chain-of-thought (CoT) prompting [45].

• Zero-shot prompting. Zero-shot prompting instructs the model to generate text on a specific topic without any prior exposure to training examples in that domain or topic. The model relies on its pre-existing knowledge and language understanding to generate coherent and contextually relevant expansion terms for original queries. Experiments show that zero-shot prompting is a simple yet effective method for query rewriting [98, 99, 102, 108-110].

• Few-shot prompting. Few-shot prompting, also known as in-context learning, provides the model with a limited set of examples or demonstrations related to the desired task or domain [86, 102, 109, 110]. These examples serve as a form of explicit instruction, allowing the model to adapt its language generation to the specific task or domain at hand. Query2Doc [86] prompts LLMs to write a document that answers the query, with demonstration query-document pairs drawn from ranking datasets such as MS MARCO [111] and NQ [112]. This work experiments with a single prompt. To further study the impact of prompt design, recent work [102] has explored eight different prompts, such as prompting LLMs to generate query expansion terms instead of entire pseudo documents, as well as CoT prompting; Table 1 shows some illustrative prompts. This work conducts more experiments than Query2Doc, but the results show that the proposed prompts are less effective than Query2Doc's.
• Chain-of-thought prompting. CoT prompting [45] is a strategy involving iterative prompting, where the model is provided with a sequence of instructions or partial outputs [102, 109]. In conversational search, query rewriting is multi-turn: queries are refined step by step through the interaction between search engines and users, a process that naturally coincides with CoT. As shown in Figure 4, users can drive the CoT process by adding instructions in each turn, such as "Based on all previous turns, ...".
In ad-hoc search, by contrast, query rewriting involves only a single round, so CoT can only be applied in a simple and coarse manner. For example, as shown in Table 1, researchers add "Give the rationale before answering" to the instruction to prompt LLMs to reason more deeply [102].

# 3.3.2 Fine-tuning

Fine-tuning is an effective approach for adapting LLMs to specific domains. The process usually starts with a pre-trained language model, such as GPT-3, which is then further trained on a dataset tailored to the target domain. This domain-specific training enables the LLM to learn unique patterns, terminology, and context relevant to the domain, improving its capacity to produce high-quality query rewrites. BEQUE [113] leverages LLMs to rewrite queries in e-commerce product search. It designs three supervised fine-tuning (SFT) tasks: quality classification of e-commerce query rewrites, product title prediction, and CoT query rewriting. To our knowledge, it is the first model to directly fine-tune LLMs, including ChatGLM [68, 114], ChatGLM2.0 [68, 114], Baichuan [115], and Qwen [116], specifically for the query rewriting task. After the SFT stage, BEQUE uses an offline system to gather feedback on the rewrites and further aligns the rewriters with e-commerce search objectives through an object alignment stage. Online A/B testing demonstrates the effectiveness of the method.

# 3.3.3 Knowledge Distillation

Although LLM-based methods have demonstrated significant improvements in query rewriting tasks, their practical online deployment is hindered by the substantial latency caused by the computational requirements of LLMs. To address this challenge, knowledge distillation has emerged as a prominent technique in the industry.
TABLE 2. Summary of existing LLM-enhanced query rewriting methods. "KD" stands for knowledge distillation. (The original table also annotated the rewriting knowledge source of each method; those cells were not recoverable here.)

| Method | Target | Approach |
|---|---|---|
| HyDE [101] | Ad-hoc | Prompting |
| Jagerman et al. [102] | Ad-hoc | Prompting |
| Query2Doc [86] | Ad-hoc | Prompting |
| Ma et al. [100] | Ad-hoc | Fine-tuning |
| PromptCase [103] | Ad-hoc | Prompting |
| GRF+PRF [99] | Ad-hoc | Prompting |
| GRM [96] | Ad-hoc | Prompting |
| InteR [98] | Ad-hoc | Prompting |
| LameR [108] | Ad-hoc | Prompting |
| CAR [107] | Ad-hoc | Prompting |
| QUILL [97] | Ad-hoc | KD |
| LLMCS [95] | Conversational | Prompting |
| CONVERSER [105] | Conversational | Prompting |
| Ye et al. [104] | Conversational | Prompting |
In the QUILL [97] framework, a two-stage distillation method is proposed. This approach uses a retrieval-augmented LLM as the professor model, a vanilla LLM as the teacher model, and a lightweight BERT model as the student model. The professor model is trained on two extensive datasets, Orcas-I [117] and EComm [97], which are specifically curated for query intent understanding. A two-stage distillation process then transfers knowledge from the professor model to the teacher model, followed by knowledge transfer from the teacher model to the student model. Empirical findings demonstrate that this knowledge distillation methodology surpasses simply scaling up the model size from base to XXL, yielding even more substantial improvements.
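The following is a minimal sketch of one distillation stage in such a pipeline, using the standard temperature-scaled KL objective; QUILL's exact losses and data flow may differ, and `student`/`teacher` are placeholder callables returning logits.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, temp=2.0):
    # One step of score distillation: the lightweight student learns to
    # match the (soft) output distribution of the frozen teacher.
    with torch.no_grad():
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    loss = F.kl_div(
        F.log_softmax(student_logits / temp, dim=-1),
        F.softmax(teacher_logits / temp, dim=-1),
        reduction="batchmean",
    ) * temp ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# A two-stage pipeline applies this twice:
#   professor (retrieval-augmented LLM) -> teacher (vanilla LLM)
#   teacher -> student (lightweight BERT rewriter)
```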
In the recently proposed "rewrite-retrieve-read" framework [100], an LLM first rewrites the queries by prompting, followed by a retrieval-augmented reading process. To improve the framework's effectiveness, a trainable rewriter, implemented as a small language model, is incorporated to further adapt search queries to the requirements of both the frozen retriever and the LLM reader. The rewriter's refinement involves a two-step training process: first, supervised warm-up training is conducted using pseudo data; then, the retrieve-then-read pipeline is modeled as a reinforcement learning scenario, with the rewriter trained as a policy model to maximize pipeline performance rewards.
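The reinforcement learning step can be sketched with a plain policy-gradient update; the actual work may use a more sophisticated RL algorithm, and every pipeline component here (`rewriter`, `retrieve`, `read`, `metric`) is a placeholder.

```python
import torch

def reinforce_step(rewriter, query, retrieve, read, metric, optimizer):
    # The small rewriter acts as a policy: sample a rewrite, run the frozen
    # retrieve-then-read pipeline, and use the end metric as the reward.
    rewrite, log_prob = rewriter.sample(query)   # log_prob: sum over tokens
    docs = retrieve(rewrite)                      # frozen retriever
    answer = read(query, docs)                    # frozen LLM reader
    reward = metric(answer)                       # e.g., EM/F1 on pseudo data
    loss = -reward * log_prob                     # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```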
# 3.4 Limitations

While LLMs offer promising capabilities for query rewriting, they also face several challenges. Here, we outline two main limitations of LLM-based query rewriters.

# 3.4.1 Concept Drifts

When used for query rewriting, LLMs may introduce unrelated information, known as concept drift, owing to their extensive knowledge base and tendency to produce detailed and redundant content. While this can enrich the query, it also risks generating irrelevant or off-target results. This phenomenon has been reported in several studies [107, 113, 118]. These studies highlight the need for a balanced approach to LLM-based query rewriting that maintains the essence and focus of the original query while leveraging the LLM's ability to enhance and clarify it. This balance is crucial for effective search and IR applications.

# 3.4.2 Correlation between Retrieval Performance and Expansion Effects

Recently, a comprehensive study [119] conducted experiments on various expansion techniques and downstream ranking models, revealing a notable negative correlation between retriever performance and the benefits of expansion. Specifically, while expansion tends to improve the scores of weaker models, it generally hurts stronger ones. This observation suggests a strategic approach: employ expansions with weaker models, or in scenarios where the target dataset differs substantially in format from the training corpus. In other cases, it is advisable to avoid expansions to maintain the clarity of the relevance signal.
# 4 RETRIEVER

In an IR system, the retriever serves as the first-pass document filter that collects broadly relevant documents for user queries. Given the enormous number of documents in an IR system, the retriever's efficiency in locating relevant documents is essential for maintaining search engine performance. High recall is also important, as the retrieved documents are fed into the ranker to generate the final results for users, which determines the ranking quality of search engines.

In recent years, retrieval models have shifted from relying on statistical algorithms [29] to neural models [3, 31]. The latter exhibit superior semantic capability and excel at understanding complicated user intents. The success of neural retrievers relies on two key factors: data and model. From the data perspective, a large amount of high-quality training data is essential; it enables retrievers to acquire comprehensive knowledge and accurate matching patterns. Furthermore, the intrinsic quality of search data, i.e., the issued queries and the document corpus, significantly influences retrieval performance. From the model perspective, a strongly representational neural architecture allows retrievers to effectively store and apply knowledge obtained from the training data.

Unfortunately, some long-standing challenges hinder the advancement of retrieval models. First, user queries are usually short and ambiguous, making it difficult for retrievers to precisely understand the user's search intents. Second, documents typically contain lengthy content and substantial noise, posing challenges for encoding long documents and extracting relevant information. Additionally, collecting human-annotated relevance labels is time-consuming and costly, which restricts the retrievers' knowledge boundaries and their ability to generalize across application domains. Moreover, existing model architectures, primarily built on BERT [59], exhibit inherent limitations that constrain the performance potential of retrievers. Recently, LLMs have exhibited extraordinary abilities in language understanding, text generation, and reasoning, which has motivated researchers to use these abilities to tackle the aforementioned challenges and to develop superior retrieval models. Roughly, these studies can be categorized into two groups: (1) leveraging LLMs to generate search data, and (2) employing LLMs to enhance model architecture.

# 4.1 Leveraging LLMs to Generate Search Data
In light of the quality and quantity of search data, there are two prevalent perspectives on how to improve retrieval performance via LLMs. The first revolves around search data refinement methods, which concentrate on reformulating input queries to precisely present user intents. The second involves training data augmentation methods, which leverage LLMs' generation ability to enlarge the training data for dense retrieval models, particularly in zero- or few-shot scenarios.

# 4.1.1 Search Data Refinement

Typically, input queries consist of short sentences or keyword-based phrases that may be ambiguous and contain multiple possible user intents; accurately determining the specific user intent is essential in such cases. Moreover, documents usually contain redundant or noisy information, which makes it harder for retrievers to extract relevance signals between queries and documents. Leveraging the strong text understanding and generation capabilities of LLMs offers a promising solution to these challenges. So far, research efforts in this domain have primarily concentrated on employing LLMs as query rewriters, aiming to refine input queries into more precise expressions of the user's search intent. Section 3 has provided a comprehensive overview of these studies, so this section refrains from further elaboration. In addition to query rewriting, an intriguing avenue for exploration involves using LLMs to improve retrieval effectiveness by refining lengthy documents; this area remains open for further investigation and advancement.
# 4.1.2 Training Data Augmentation

Due to the high economic and time costs of human-annotated labels, a common problem in training neural retrieval models is the lack of training data. Fortunately, the excellent text generation capability of LLMs offers a potential solution. A key research focus lies in devising strategies that leverage LLMs to generate pseudo-relevant signals and augment the training dataset for the retrieval task.

Why do we need data augmentation? Previous studies of neural retrieval models focused on supervised learning, namely training retrieval models using labeled data from specific domains. For example, MS MARCO [111] provides a vast repository, containing a million passages, more than 200,000 documents, and 100,000 queries with human-annotated relevance labels, which has greatly facilitated the development of supervised retrieval models. However, this paradigm inherently constrains the retriever's generalization ability on out-of-distribution data from other domains. The application spectrum of retrieval models ranges from natural question answering to biomedical IR, and it is expensive to annotate relevance labels for data from different domains. As a result, there is an emerging need for zero-shot and few-shot learning models to address this problem [128]. A common practice for improving a model's effectiveness in a target domain without adequate label signals is data augmentation.

Fig. 5. Two typical frameworks for LLM-based data augmentation in the retrieval task, along with their prompt examples. Note that relevance label generation methods do not treat questions as inputs but regard their generation probabilities conditioned on the retrieved passages as soft relevance labels.

TABLE 3. Comparison of existing data augmentation methods powered by LLMs for training retrieval models.

| Method | # Examples | Generator | Synthetic Data | Filter Method | LLMs' Tuning |
|---|---|---|---|---|---|
| InPairs [120] | 3 | Curie | Relevant query | Generation probability | Fixed |
| Ma et al. [121] | 0-2 | Alpaca-LLaMA & tk-Instruct | Relevant query | - | Fixed |
| InPairs-v2 [122] | 3 | GPT-J | Relevant query | Relevance score from fine-tuned monoT5-3B | Fixed |
| PROMPTAGATOR [123] | 0-8 | FLAN | Relevant query | Round-trip filtering | Fixed |
| TQGen [124] | 0 | T0 | Relevant query | Generation probability | Fixed |
| UDAPDR [125] | 0-3 | GPT-3 & FLAN-T5-XXL | Relevant query | Round-trip filtering | Fixed |
| SPTAR [126] | 1-2 | LLaMA-7B & Vicuna-7B | Relevant query | BM25 filtering | Soft prompt tuning |
| ART [127] | 0 | T5-XL & T5-XXL | Soft relevance labels | - | Fixed |
How to apply LLMs for data augmentation? In the scenario of IR, it is easy to collect numerous documents. The challenging and costly task lies in gathering real user queries and labeling the relevant documents accordingly. Considering the strong text generation capability of LLMs, many researchers [120, 122] suggest using LLM-driven processes to create pseudo queries or relevance labels based on existing collections. These approaches facilitate the construction of relevant query-document pairs, enlarging the training data for retrieval models. According to the type of generated data, there are two mainstream approaches that complement LLM-based data augmentation for retrieval models, i.e., pseudo query generation and relevance label generation. Their frameworks are visualized in Figure 5. Next, we give an overview of the related studies.

• Pseudo query generation. Given the abundance of documents, a straightforward idea is to use LLMs to generate their corresponding pseudo queries. One such illustration is presented by InPairs [120], which leverages the in-context learning capability of GPT-3. This method employs a collection of query-document pairs as demonstrations; these pairs are combined with a document and presented as input to GPT-3, which then generates possible relevant queries for the given document. By combining the same demonstrations with various documents, it is easy to create a vast pool of synthetic training samples and support the fine-tuning of retrievers on specific target domains. Recent studies [121] have also leveraged open-source LLMs, such as Alpaca-LLaMA and tk-Instruct, to produce sufficient pseudo queries, applying curriculum learning to pre-train dense retrievers. To enhance the reliability of these synthetic samples, a fine-tuned model (e.g., a monoT5-3B model fine-tuned on MS MARCO [122]) is employed to filter the generated queries, and only the top pairs with the highest estimated relevance scores are kept for training. This "generating-then-filtering" paradigm can be conducted iteratively in a round-trip filtering manner, i.e., by first fine-tuning a retriever on the generated samples and then filtering the generated samples using this retriever; repeating these EM-like steps until convergence can produce high-quality training sets [123]. Furthermore, by adjusting the prompt given to LLMs, they can generate queries of different types, allowing a more accurate simulation of real queries with various patterns [124].
In practice, it is costly to generate a substantial number of pseudo queries through LLMs, and balancing generation costs against the quality of the generated samples has become a pressing problem. To tackle this, UDAPDR [125] first produces a limited set of synthetic queries with LLMs for the target domain. These high-quality examples are subsequently used as prompts for a smaller model to generate a large number of queries, thereby constructing the training set for that specific domain. It is worth noting that the aforementioned studies primarily rely on fixed LLMs with frozen parameters. Empirically, optimizing LLMs' parameters can significantly improve their performance on downstream tasks; unfortunately, this pursuit is impeded by the prohibitively high demand for computational resources. To overcome this obstacle, SPTAR [126] introduces a soft prompt tuning technique that only optimizes the prompts' embedding layer during training. This allows LLMs to better adapt to the task of generating pseudo queries, striking a favorable balance between training cost and generation quality. In addition to the above studies, pseudo query generation methods have also been introduced in other application scenarios, such as conversational dense retrieval [105] and multilingual dense retrieval [129].

• Relevance label generation. In some downstream retrieval tasks, such as question answering, the collection of questions is also sufficient, but the relevance labels connecting these questions with the passages of supporting evidence are very limited. In this context, leveraging the capability of LLMs for relevance label generation is a promising approach to augment the training corpus for retrievers. A recent method, ART [127], exemplifies this approach. It first retrieves the top relevant passages for each question. Then, it employs an LLM to produce the generation probabilities of the question conditioned on these top passages. After normalization, these probabilities serve as soft relevance labels for training the retriever.

Additionally, to highlight the similarities and differences among the corresponding methods, we present a comparison in Table 3. It compares the aforementioned methods from various perspectives, including the number of examples, the generator employed, the type of synthetic data produced, the method applied to filter synthetic data, and whether the LLMs are fine-tuned. This table serves to facilitate a clearer understanding of the landscape of these methods.

# 4.2 Employing LLMs to Enhance Model Architecture

Leveraging the excellent text encoding and decoding capabilities of LLMs, it is feasible to understand queries and documents with greater precision than with earlier, smaller-sized models [59]. Researchers have endeavored to utilize LLMs as the foundation for constructing advanced retrieval models. These methods can be grouped into two categories, i.e., dense retrievers and generative retrievers.

# 4.2.1 Dense Retriever

In addition to the quantity and quality of the data, the representational capability of models also greatly influences the efficacy of retrievers.
embedding layer during the training process. This approach allows LLMs to better adapt to the task of gener- ating pseudo-queries, striking a favorable balance between training cost and generation quality. In addition to the above studies, pseudo query gen- eration methods are also introduced in other application scenarios, such as conversational dense retrieval [105] and multilingual dense retrieval [129]. Relevance label generation. In some downstream tasks of retrieval, such as question-answering, the collection of questions is also sufficient. However, the relevance labels connecting these questions with the passages of support- ing evidence are very limited. In this context, leveraging the capability of LLMs for relevance label generation is a promising approach that can augment the training corpus for retrievers. A recent method, ART [127], exemplifies this approach. It first retrieves the top-relevant passages for each question. Then, it employs an LLM to produce the genera- tion probabilities of the question conditioned on these top passages. After a normalization process, these probabilities serve as soft relevance labels for the training of the retriever. Additionally, to highlight the similarities and differences among the corresponding methods, we present a compar- ative result in Table 3. It compares the aforementioned methods from various perspectives, including the number of examples, the generator employed, the type of synthetic data produced, the method applied to filter synthetic data, and whether LLMs are fine-tuned. This table serves to facilitate a clearer understanding of the landscape of these methods. # 4.2 Employing LLMs to Enhance Model Architecture Leveraging the excellent text encoding and decoding capa- bilities of LLMs, it is feasible to understand queries and doc- uments with greater precision compared to earlier smaller- sized models [59]. Researchers have endeavored to utilize LLMs as the foundation for constructing advanced retrieval models. These methods can be grouped into two categories, i.e., dense retrievers and generative retrievers. # 4.2.1 Dense Retriever In addition to the quantity and quality of the data, the representative capability of models also greatly influences the efficacy of retrievers.
2308.07107#54
2308.07107#56
2308.07107
[ "2305.03195" ]
2308.07107#56
Large Language Models for Information Retrieval: A Survey
Inspired by the LLMâ s excellent capability to encode and comprehend natural language, some researchers [130â 132] leverage LLMs as retrieval en- coders and investigate the impact of model scale on retriever performance. General Retriever. Since the effectiveness of retrievers pri- marily relies on the capability of text embedding, the evo- lution of text embedding models often has a significant impact on the progress of retriever development. In the era of LLMs, a pioneer work is made by OpenAI [130]. They view the adjacent text segments as positive pairs to facilitate the unsupervised pre-training of a set of text embedding models, denoted as cpt-text, whose parameter values vary from 300M to 175B. Experiments conducted on the MS MARCO [111] and BEIR [128] datasets indicate that larger model scales have the potential to yield improved performance in unsupervised learning and transfer learning for text search tasks. Nevertheless, pre-training LLMs from scratch is prohibitively expensive for most researchers. To overcome this limitation, some studies [131, 133] use pre- trained LLMs to initialize the bi-encoder of dense retriever. Specifically, GTR [133] adopts T5-family models, including T5-base, Large, XL, and XXL, to initialize and fine-tune dense retrievers. RepLLaMA [131] further fine-tunes the LLaMA model on multiple stages of IR, including retrieval and reranking. For the dense retrieval task, RepLLaMA appends an end-of-sequence token â
</s>â to the input sequences, i.e., queries or documents, and regards its output embeddings as the representation of queries or documents. The experiments confirm again that larger model sizes can lead to better performance, particularly in zero-shot settings. Notably, the researchers of RepLLaMA [131] also study the effectiveness of applying LLaMA in the reranking stage, which will be introduced in Section 5.1.3. Task-aware Retriever. While the aforementioned studies primarily focus on using LLMs as text embedding mod- els for downstream retrieval tasks, retrieval performance can be greatly enhanced when task-specific instructions are integrated. For example, TART [132] devises a task-aware retrieval model that introduces a task-specific instruction before the question. This instruction includes descriptions of the taskâ s intent, domain, and desired retrieved unit. For instance, given that the task is question-answering, an effective prompt might be â Retrieve a Wikipedia text that answers this question. {question}â
Here, "Wikipedia" (domain) indicates the expected source of retrieved documents, "text" (unit) suggests the type of content to retrieve, and "answers this question" (intent) describes the intended relationship between the retrieved texts and the question. This approach takes advantage of the powerful language modeling capability and extensive knowledge of LLMs to precisely capture the user's search intent across various retrieval tasks. Considering the efficiency of retrievers, TART first fine-tunes a TART-full model with a cross-encoder architecture, which is initialized from LLMs (e.g., T0-3B, Flan-T5). Then, a TART-dual model initialized from Contriever [134] is learned by distilling knowledge from TART-full.
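The following is a minimal sketch of LLM-based dense retrieval in the spirit of RepLLaMA (end-of-sequence pooling) and TART (instruction prefixes), assuming a Hugging Face-style causal LM. The checkpoint name, pooling, and instruction format are illustrative assumptions rather than the exact recipes of those papers.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed checkpoint
encoder = AutoModel.from_pretrained("meta-llama/Llama-2-7b-hf")

def embed(text: str, instruction: str = "") -> torch.Tensor:
    # RepLLaMA-style: append an end-of-sequence token and use its hidden
    # state as the sequence embedding; TART-style: prepend an instruction.
    inputs = tokenizer(f"{instruction}{text}{tokenizer.eos_token}", return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, dim)
    vec = hidden[0, -1]                               # embedding of the final token
    return vec / vec.norm()

query_vec = embed("when was the eiffel tower built",
                  instruction="Retrieve a Wikipedia text that answers this question. ")
doc_vec = embed("The Eiffel Tower was completed in 1889 ...")
score = query_vec @ doc_vec  # inner-product relevance score
```

In practice, document embeddings are precomputed and stored in an approximate nearest-neighbor index so that only the query is encoded at search time.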
# 4.2.2 Generative Retriever

Traditional IR systems typically follow the "index-retrieval-rank" paradigm to locate relevant documents based on user queries, which has proven effective in practice. However, these systems usually consist of three separate modules: the index module, the retrieval module, and the reranking module. Therefore, optimizing these modules collectively can be challenging, potentially resulting in sub-optimal retrieval outcomes. Additionally, this paradigm demands additional space for storing pre-built indexes, further burdening storage resources. Recently, model-based generative retrieval methods [135-137] have emerged to address these challenges. These methods move away from the traditional
"index-retrieval-rank" paradigm and instead use a unified model to directly generate document identifiers (i.e., DocIDs) relevant to the queries. In these model-based generative retrieval methods, the knowledge of the document corpus is stored in the model parameters, eliminating the need for additional storage space for an index. Existing methods have explored generating document identifiers through fine-tuning and prompting of LLMs [138, 139].

Fine-tuning LLMs. Given the vast amount of world knowledge contained in LLMs, it is intuitive to leverage them for building model-based generative retrievers. DSI [138] is a typical method that fine-tunes pre-trained T5 models on retrieval datasets. The approach involves encoding queries and decoding document identifiers directly to perform retrieval. The authors explore multiple techniques for generating document identifiers and find that constructing semantically structured identifiers yields optimal results. In this strategy, DSI applies hierarchical clustering to group documents according to their semantic embeddings and assigns a semantic DocID to each document based on its hierarchical group. To ensure that the output DocIDs are valid and represent actual documents in the corpus, DSI constructs a trie using all DocIDs and utilizes constrained beam search during the decoding process. Furthermore, this approach observes that the scaling law, which suggests that larger LMs lead to improved performance, also applies to generative retrievers.

Prompting LLMs. In addition to fine-tuning LLMs for retrieval, it has been found that LLMs (e.g., GPT-series models) can directly generate relevant web URLs for user queries with a few in-context demonstrations [139]. This unique capability of LLMs is believed to arise from their training exposure to various HTML resources. As a result, LLMs can naturally serve as generative retrievers that directly generate document identifiers to retrieve relevant documents for input queries. To achieve this, an LLM-URL [139] model is proposed. It utilizes the GPT-3 text-davinci-003 model to yield candidate URLs and designs regular expressions to extract valid URLs from these candidates to locate the retrieved documents.

To provide a comprehensive understanding of this topic, Table 4 summarizes the common and unique characteristics of the LLM-based retrievers discussed above.

TABLE 4. The comparison of retrievers that leverage LLMs as the foundation. "KD" is short for "Knowledge Distillation".

| Methods | Backbone | Architecture | LLM's tuning |
| --- | --- | --- | --- |
| cpt-text [130] | GPT-series | Dense | Pre-training & Fine-tuning |
| GTR [133] | T5 | Dense | Pre-training & Fine-tuning |
| RepLLaMA [131] | LLaMA | Dense | Fine-tuning |
| TART-full [132] | T0 & Flan-T5 | Dense | Fine-tuning & Prompting |
| TART-dual [132] | Contriever | Dense | KD & Prompting |
| DSI [138] | T5 | Generative | Fine-tuning |
| LLM-URL [139] | GPT-3 | Generative | Prompting |
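Before moving on, here is a minimal sketch of the trie-constrained DocID decoding used by DSI-style generative retrievers: a prefix trie over valid DocIDs restricts each decoding step to continuations that can still reach a real document. The interfaces are illustrative assumptions.

```python
def build_trie(docids: list[list[int]]) -> dict:
    """Build a prefix trie over tokenized DocIDs (e.g., '2-5-1' -> [2, 5, 1])."""
    root: dict = {}
    for ids in docids:
        node = root
        for tok in ids:
            node = node.setdefault(tok, {})
        node["<end>"] = {}
    return root

def allowed_tokens(trie: dict, prefix: list[int]) -> list[int]:
    """Valid next tokens given the DocID prefix decoded so far."""
    node = trie
    for tok in prefix:
        node = node[tok]
    return [t for t in node if t != "<end>"]

# With Hugging Face generation, this can be plugged into beam search via the
# `prefix_allowed_tokens_fn` hook (details such as stripping the decoder
# start token from the decoded prefix are omitted here):
#
# model.generate(
#     **encoded_query,
#     num_beams=10,
#     prefix_allowed_tokens_fn=lambda batch_id, seq: allowed_tokens(trie, seq.tolist()),
# )
```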
# 4.3 Limitations

Though some efforts have been made for LLM-augmented retrieval, there are still many areas that require more detailed investigation. For example, a critical requirement for retrievers is fast response, while a main problem of existing LLMs is their huge number of parameters and long inference time. Addressing this limitation of LLMs to ensure the response time of retrievers is a critical task. Moreover, even when employing LLMs to augment datasets (a context with lower inference-time demands), the potential mismatch between LLM-generated texts and real user queries could impact retrieval effectiveness. Furthermore, as LLMs usually lack domain-specific knowledge, they need to be fine-tuned on task-specific datasets before applying them to downstream tasks. Therefore, developing efficient strategies to fine-tune these LLMs with numerous parameters emerges as a key concern.
# 5 RERANKER

Reranker, as the second-pass document filter in IR, aims to rerank a document list retrieved by the retriever (e.g., BM25) based on the query-document relevance. Based on the usage of LLMs, existing LLM-based reranking methods can be divided into three paradigms: utilizing LLMs as supervised rerankers, utilizing LLMs as unsupervised rerankers, and utilizing LLMs for training data augmentation.

TABLE 5. Summary of existing LLM-based reranking methods. "Enc" and "Dec" denote encoder and decoder, respectively.

| Paradigm | Type | Method |
| --- | --- | --- |
| Supervised Rerankers | Enc-only | [140] |
| | Enc-dec | [13], [141], [142], [143] |
| | Dec-only | [131], [144], [145] |
| Unsupervised Rerankers | Pointwise | [146], [147], [148], [149], [150], [151] |
| | Listwise | [152], [153], [154] |
| | Pairwise | [155], [156] |
| Data Augmentation | - | [157], [158], [159], [160], [161], [162] |
These paradigms are summarized in Table 5 and will be elaborated upon in the following sections. Recall that we use the term document to refer to the text retrieved in general IR scenarios, including instances such as passages (e.g., passages in the MS MARCO passage ranking dataset [111]).

# 5.1 Utilizing LLMs as Supervised Rerankers

Supervised fine-tuning is an important step in applying pre-trained LLMs to a reranking task. Due to the lack of awareness of ranking during pre-training, LLMs cannot appropriately measure query-document relevance or fully understand reranking tasks. By fine-tuning LLMs on task-specific ranking datasets, such as the MS MARCO passage ranking dataset [111], which includes signals of
both relevance and irrelevance, LLMs can adjust their parameters to yield better performance on reranking tasks. Based on the backbone model structure, we can categorize existing supervised rerankers as: (1) encoder-only, (2) encoder-decoder, and (3) decoder-only.

# 5.1.1 Encoder-only

The encoder-based rerankers represent a significant turning point in applying LLMs to document ranking tasks. They demonstrate how some pre-trained language models (e.g., BERT [59]) can be fine-tuned to deliver highly accurate relevance predictions. A representative approach is monoBERT [140], which transforms a query-document pair into a sequence "[CLS] query [SEP] document [SEP]" as the model input and calculates the relevance score by feeding the "[CLS]" representation into a linear layer.
The reranking model is optimized based on the cross-entropy loss.

# 5.1.2 Encoder-Decoder

In this field, existing studies mainly formulate document ranking as a generation task and optimize an encoder-decoder-based reranking model [13, 141-143]. Specifically, given the query and the document, reranking models are usually fine-tuned to generate a single token, such as "true" or "false". During inference, the query-document relevance score is determined based on the logit of the generated token. For example, a T5 model can be fine-tuned to generate classification tokens for relevant or irrelevant query-document pairs [13]. At inference time, a softmax function is applied to the logits of the "true" and "false" tokens, and the relevance score is calculated as the probability of the "true" token.
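A minimal sketch of this fine-tuning recipe follows, assuming a Hugging Face T5 checkpoint; the prompt format follows the monoT5-style "Query: ... Document: ... Relevant:" convention, but treat the details as illustrative.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def training_step(query: str, document: str, relevant: bool) -> float:
    """One step: teach the model to emit 'true' for relevant pairs, 'false' otherwise."""
    src = tok(f"Query: {query} Document: {document} Relevant:",
              return_tensors="pt", truncation=True)
    tgt = tok("true" if relevant else "false", return_tensors="pt")
    loss = model(**src, labels=tgt.input_ids).loss  # cross-entropy on the target token(s)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```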
The following method [141] involves a multi-view learning approach based on the T5 model. This approach simultaneously considers two tasks: generating classification tokens for a given query-document pair and generating the corresponding query conditioned on the provided document. DuoT5 [142] considers a triple $(q, d_i, d_j)$ as the input of the T5 model and is fine-tuned to generate the token "true" if document $d_i$ is more relevant to query $q$ than document $d_j$, and "false" otherwise.
During inference, for each document $d_i$, it enumerates all other documents $d_j$ and uses global aggregation functions to generate the relevance score $s_i$ for document $d_i$ (e.g., $s_i = \sum_{j \neq i} p_{i,j}$, where $p_{i,j}$ represents the probability of generating "true" when taking $(q, d_i, d_j)$ as the model input). Although these generative loss-based methods outperform several strong ranking baselines, they are not optimal for reranking tasks.
This stems from two primary reasons. First, it is commonly expected that a reranking model yields a numerical relevance score for each query-document pair rather than text tokens. Second, compared to generation losses, it is more reasonable to optimize the reranking model using ranking losses (e.g., RankNet [163]). Recently, RankT5 [143] has directly calculated the relevance score for a query-document pair and optimized the ranking performance with "pairwise" or "listwise" ranking losses.
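To make the ranking-loss idea concrete, here is a minimal sketch of a pairwise RankNet-style objective over model scores; the scoring function is a stand-in for any of the rerankers discussed above.

```python
import torch
import torch.nn.functional as F

def ranknet_loss(score_pos: torch.Tensor, score_neg: torch.Tensor) -> torch.Tensor:
    """Pairwise RankNet loss: encourage score(relevant doc) > score(irrelevant doc).
    Equivalent to binary cross-entropy on the score difference."""
    return F.binary_cross_entropy_with_logits(
        score_pos - score_neg, torch.ones_like(score_pos)
    )

# Usage: scores come from the reranker, e.g., the relevance logit for
# (query, positive_doc) and (query, negative_doc).
s_pos = torch.tensor([2.3], requires_grad=True)
s_neg = torch.tensor([0.7], requires_grad=True)
loss = ranknet_loss(s_pos, s_neg)
loss.backward()
```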
An avenue for potential performance enhancement lies in substituting the base-sized T5 model with a larger-scale counterpart.

# 5.1.3 Decoder-only

Recently, there have been some attempts [131, 144, 145] to rerank documents by fine-tuning decoder-only models (such as LLaMA). For example, RankLLaMA [131] proposes formatting the query-document pair into a prompt "query: {query} document: {document} [EOS]" and utilizes the last token representation for relevance calculation. Besides, RankingGPT [144] has been proposed to bridge the gap between LLMs' conventional training objectives and the specific needs of document ranking through two-stage training. The first stage continually pre-trains LLMs on a large number of relevant text pairs collected from web resources, helping the LLMs to naturally generate queries relevant to an input document. The second stage focuses on improving the model's text ranking performance using high-quality supervised data and well-designed loss functions. Different from these pointwise rerankers [131, 144], Rank-without-GPT [145] proposes to train a listwise reranker that directly outputs a reranked document list. The authors first demonstrate that existing pointwise datasets (such as MS MARCO [111]), which only contain binary query-document labels, are insufficient for training effective listwise rerankers. They then propose to use the ranking results of existing ranking systems (such as the Cohere rerank API) as gold rankings to train a listwise reranker based on Code-LLaMA-Instruct.

# 5.2 Utilizing LLMs as Unsupervised Rerankers

As the size of LLMs scales up (e.g., exceeding 10 billion parameters), it becomes increasingly difficult to fine-tune the reranking model. Addressing this challenge, recent efforts have attempted to prompt LLMs to directly enhance document reranking in an unsupervised way. In general, these prompting strategies can be divided into three categories: pointwise, listwise, and pairwise methods. A comprehensive exploration of these strategies follows in the subsequent sections.
# 5.2.1 Pointwise Methods

The pointwise methods measure the relevance between a query and a single document and can be categorized into two types: relevance generation [146, 147] and query generation [148-150]. The upper part of Figure 6 (a) shows an example of relevance generation based on a given prompt, where LLMs output a binary label ("Yes" or "No") based on whether the document is relevant to the query. Following [13], the query-document relevance score $f(q, d)$ can be calculated based on the log-likelihood of the tokens "Yes" and "No" with a softmax function:
$$f(q, d) = \frac{\exp(S_Y)}{\exp(S_Y) + \exp(S_N)}, \qquad (1)$$

where $S_Y$ and $S_N$ represent the LLM's log-likelihood scores of "Yes" and "No", respectively. In addition to binary labels, Zhuang et al. [147] propose to incorporate fine-grained relevance labels (e.g., "highly relevant", "somewhat relevant", and "not relevant") into the prompt, which helps LLMs more effectively differentiate among documents with varying levels of relevance to a query.
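A minimal sketch of relevance generation with Eq. (1) follows, assuming access to the model's output logits via a local Hugging Face seq2seq LM; the prompt wording follows Figure 6 (a), and the checkpoint is an illustrative stand-in for FLAN-UL2.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/flan-t5-xl")  # stand-in for FLAN-UL2
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")

def pointwise_relevance(query: str, document: str) -> float:
    """Eq. (1): softmax over the logits of 'Yes' and 'No'."""
    prompt = (f"Document: {document}\nQuery: {query}\n"
              "Does the document answer the query?")
    enc = tok(prompt, return_tensors="pt", truncation=True)
    dec = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=dec).logits[0, -1]
    s_yes = logits[tok.encode("Yes")[0]]
    s_no = logits[tok.encode("No")[0]]
    return torch.softmax(torch.stack([s_yes, s_no]), dim=0)[0].item()

# Candidate documents are then reranked by this score in descending order.
```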
[Figure 6 appears here; it illustrates the three prompting strategies with example templates. (a) Pointwise, relevance generation: "Document: #{document} Query: #{query} Does the document answer the query?" with output "Yes / No"; query generation: "Please write a query based on this document. Document: #{document} Query:" with output "#{query}". (b) Listwise: "The following are documents related to query #{query}. [1] #{document_1} ... Rank these documents based on their relevance to the query." with output "[2] > [3] > [1] > ...". (c) Pairwise: "Given a query #{query}, which of the following two documents is more relevant to the query? Document 1: #{document_1}; Document 2: #{document_2}. Output Document 1 or Document 2:" with output "Document 1 / Document 2".]
Fig. 6. Three types of unsupervised reranking methods: (a) pointwise methods that consist of relevance generation (upper) and query generation (lower), (b) listwise methods, and (c) pairwise methods.

As for the query generation shown in the lower part of Figure 6 (a), the query-document relevance score is determined by the average log-likelihood of generating the actual query tokens conditioned on the document:

$$\text{score} = \frac{1}{|q|} \sum_{i=1}^{|q|} \log p(q_i \mid q_{<i}, d, \mathcal{P}), \qquad (2)$$

where $|q|$ denotes the number of tokens in query $q$, $d$ denotes the document, and $\mathcal{P}$ represents the provided prompt. The documents are then reranked based on their relevance scores. It has been proven that some LLMs (such as T0) yield significant performance in zero-shot document reranking based on the query generation method [148]. Recently, research [149] has also shown that LLMs pre-trained without any supervised instruction fine-tuning (such as LLaMA) also exhibit robust zero-shot ranking ability. Although effective, these methods primarily rely on a handcrafted prompt (e.g., "Please write a query based on this document"), which may not be optimal. As the prompt is a key factor in instructing LLMs to perform various NLP tasks, it is important to optimize prompts for better performance. Along this line, a discrete prompt optimization method, Co-Prompt [150], is proposed for better prompt generation in reranking tasks. Besides, PaRaDe [151] proposes a difficulty-based method to select few-shot demonstrations to include in the prompt, showing significant improvements over zero-shot prompts.

Note that these pointwise methods rely on accessing the output logits of LLMs to calculate the query-document relevance scores. As a result, they are not applicable to closed-source LLMs, whose API-returned results do not include logits.

# 5.2.2 Listwise Methods

Listwise methods [152, 153] aim to directly rank a list of documents (see Figure 6 (b)). These methods insert the query and a document list into the prompt and instruct the LLMs to output the reranked document identifiers. Due to the limited input length of LLMs, it is not feasible to insert all candidate documents into the prompt. To alleviate this issue, these methods employ a sliding window strategy that reranks a subset of candidate documents each time: starting from the back of the list, a window slides toward the front, and only the documents within the window are reranked at each step.

Although listwise methods have yielded promising performance, they still suffer from some weaknesses. First, according to the experimental results [152], only the GPT-4-based method achieves competitive performance. When using smaller language models (e.g., FLAN-UL2 with 20B parameters), listwise methods may produce very few usable results and underperform many supervised methods. Second, the performance of listwise methods is highly sensitive to the document order in the prompt. When the document order is randomly shuffled, listwise methods perform even worse than BM25 [152], revealing positional bias issues in the listwise ranking of LLMs. To alleviate this issue, Tang et al. [154] introduce a permutation self-consistency method, which shuffles the list in the prompt and aggregates the generated results to achieve a more accurate and unbiased ranking.
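A minimal sketch of the sliding-window strategy described above, assuming a `llm_rank` helper that returns a window of documents reordered by an LLM (as in RankGPT-style prompting); the window sizes follow common settings but are illustrative.

```python
def sliding_window_rerank(query, docs, llm_rank, window_size=20, step=10):
    """Rerank `docs` back-to-front with an LLM that can only see
    `window_size` documents at a time."""
    ranked = list(docs)
    end = len(ranked)
    while end > 0:
        start = max(0, end - window_size)
        window = ranked[start:end]
        ranked[start:end] = llm_rank(query, window)  # LLM returns the reordered window
        end -= step  # slide toward the head of the list
    return ranked
```

Because consecutive windows overlap by `window_size - step` positions, strong documents from the tail can keep "bubbling" toward the head of the list across steps.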
Please write a query based on this documentâ ), which may not be optimal. As prompt is a key factor in instructing LLMs to perform various NLP tasks, it is important to optimize prompt for better per- formance. Along this line, a discrete prompt optimization method Co-Prompt [150] is proposed for better prompt gen- eration in reranking tasks. Besides, PaRaDe [151] proposes a difficulty-based method to select few-show demonstrations to include in the prompt, proving significant improvements compared with zero-shot prompts. query and a document list into the prompt and instruct the LLMs to output the reranked document identifiers. Due to the limited input length of LLMs, it is not feasible to insert all candidate documents into the prompt. To alleviate this issue, these methods employ a sliding window strategy to rerank a subset of candidate documents each time. This strategy involves ranking from back to front using a sliding window, re-ranking only the documents within the window at a time. Although listwise methods have yielded promising per- formance, they still suffer from some weaknesses. First, according to the experimental results [152], only the GPT-4- based method can achieve competitive performance. When using smaller parameterized language models (e.g., FLAN- UL2 with 20B parameters), listwise methods may produce very few usable results and underperform many supervised methods. Second, the performance of listwise methods is highly sensitive to the document order in the prompt. When the document order is randomly shuffled, listwise methods perform even worse than BM25 [152], revealing positional bias issues in the listwise ranking of LLMs. To alleviate this issue, Tang et al. [154] introduce a permutation self- consistency method, which involves shuffling the list in the prompt and aggregating the generated results to achieve a more accurate and unbiased ranking. # 5.2.3 Pairwise Methods Note that these pointwise methods rely on accessing the output logits of LLMs to calculate the query-document relevance scores. As a result, they are not applicable to closed-sourced LLMs, whose API-returned results do not include logits. # 5.2.2 Listwise Methods Listwise methods [152, 153] aim to directly rank a list of documents (see Figure 6 (b)). These methods insert the
In pairwise methods [155], LLMs are given a prompt that consists of a query and a document pair (see Figure 6 (c)). They are then instructed to generate the identifier of the more relevant document. To rerank all candidate documents, aggregation methods like AllPairs are used. AllPairs first generates all possible document pairs and aggregates a final relevance score for each document. To speed up the ranking process, efficient sorting algorithms, such as heap sort and bubble sort, are usually employed [155]. These sorting algorithms utilize efficient data structures to compare document pairs selectively and elevate the most relevant documents to the top of the ranking list, which is particularly useful in top-k ranking. Experimental results show state-of-the-art performance on standard benchmarks using moderate-sized LLMs (e.g., FLAN-UL2 with 20B parameters), which are much smaller than those typically employed in listwise methods (e.g., GPT-3.5).

Although effective, pairwise methods still suffer from high time complexity. To alleviate this efficiency problem, a setwise approach [156] has been proposed that compares a set of documents at a time and selects the most relevant one from them. This approach allows the sorting algorithms (such as heap sort) to compare more than two documents at each step, thereby reducing the total number of comparisons and speeding up the sorting process.
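A minimal sketch of pairwise ranking with a bubble-sort pass follows, assuming a `compare` function that prompts the LLM with a query and two documents (as in Figure 6 (c)) and returns which one is preferred; this mirrors the general PRP idea rather than its exact implementation.

```python
def compare(llm, query: str, doc_a: str, doc_b: str) -> bool:
    """True if the LLM prefers doc_a over doc_b for this query."""
    prompt = (f'Given a query "{query}", which of the following two documents '
              f"is more relevant to the query?\nDocument 1: {doc_a}\n"
              f"Document 2: {doc_b}\nOutput Document 1 or Document 2:")
    return "Document 1" in llm(prompt)  # assumed text-completion interface

def pairwise_rerank_topk(llm, query: str, docs: list[str], k: int = 10) -> list[str]:
    """One bubble-sort pass per top-k slot: O(k * N) LLM comparisons,
    far fewer than the O(N^2) needed by AllPairs."""
    docs = list(docs)
    for i in range(min(k, len(docs) - 1)):
        for j in range(len(docs) - 1, i, -1):
            if compare(llm, query, docs[j], docs[j - 1]):
                docs[j], docs[j - 1] = docs[j - 1], docs[j]
    return docs[:k]
```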
TABLE 6. The comparison between different methods. N denotes the number of documents to rerank. Complexity, Logits, and Batching represent the computational complexity, whether the method needs access to the LLM's logits, and whether it allows batch inference, respectively. k is the constant in the sliding-window strategy. For performance, we use NDCG@10 as the metric; the results are calculated by reranking the top 100 documents retrieved by BM25 on TREC-DL2019 and TREC-DL2020, and come from a previous study [155]. The best result is in bold and the second-best is in italics. *Since the parameters of ChatGPT have not been released, its model size is based on public estimates [164].

| Paradigm | Method | LLM | Size | Complexity | Logits | Batching | TREC-DL19 | TREC-DL20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Initial Retriever | BM25 | - | - | - | - | - | 50.58 | 47.96 |
| Supervised | monoBERT [140] | BERT | 340M | - | ✓ | ✓ | 70.50 | 67.28 |
| Supervised | monoT5 [13] | T5 | 220M | - | ✓ | ✓ | 71.48 | 66.99 |
| Supervised | RankT5 [143] | T5 | 3B | - | ✓ | ✓ | 71.22 | 69.49 |
| Unsupervised-Pointwise | Query Generation [148] | FLAN-UL2 | 20B | O(N) | ✓ | ✓ | 58.95 | 60.02 |
| Unsupervised-Pointwise | Relevance Generation [146] | FLAN-UL2 | 20B | O(N) | ✓ | ✓ | 64.61 | 65.39 |
| Unsupervised-Listwise | RankGPT3.5 [152] | gpt-3.5-turbo | 154B* | O(k·N) | ✗ | ✗ | 65.80 | 62.91 |
| Unsupervised-Listwise | RankGPT4 [152] | gpt-4 | 1T* | O(k·N) | ✗ | ✗ | **75.59** | *70.56* |
| Unsupervised-Pairwise | PRP-Allpair [155] | FLAN-UL2 | 20B | O(N²) | ✓ | ✓ | *72.42* | **70.68** |
| Unsupervised-Pairwise | PRP-Heapsort [155] | FLAN-UL2 | 20B | O(N·log N) | ✓ | ✓ | 71.88 | 69.43 |
# 5.2.4 Comparison and Discussion

In this part, we compare the different unsupervised methods from various aspects to better illustrate the strengths and weaknesses of each method; the comparison is summarized in Table 6. We choose representative methods [146, 148, 152, 155] in pointwise, listwise, and pairwise ranking, and include several supervised methods [13, 140, 143] mentioned in Section 5.1 for performance comparison.

The pointwise methods (Query Generation and Relevance Generation) judge the relevance of each query-document pair independently, thus offering lower time complexity and enabling batch inference. However, compared to other methods, they do not have an advantage in terms of performance. The listwise method yields significant performance, especially when calling GPT-4, but suffers from expensive API costs and non-reproducibility [160]. Compared with the listwise method, the pairwise method shows competitive results based on a much smaller model, FLAN-UL2 (20B). Stemming from the necessity of comparing an extensive number of document pairs, its primary drawback is low efficiency.

# 5.3 Utilizing LLMs for Training Data Augmentation

Furthermore, in the realm of reranking, researchers have explored the integration of LLMs for training data augmentation [157-162]. For example, ExaRanker [157] generates explanations for retrieval datasets using GPT-3.5 and subsequently trains a seq2seq ranking model to generate relevance labels along with corresponding explanations for given query-document pairs. InPars-Light [158] is proposed as a cost-effective method to synthesize queries for documents by prompting LLMs. Conversely, the ChatGPT-RetrievalQA dataset [159] is constructed by generating synthetic documents with LLMs in response to user queries. Recently, many studies [160-162] have also attempted to distill the document ranking capability of LLMs into specialized models. RankVicuna [160] proposes to use the ranking lists produced by RankGPT3.5 [152] as gold lists to train a 7B-parameter Vicuna model. RankZephyr [161] introduces a two-stage training strategy for distillation: it first applies the RankVicuna recipe to train Zephyr, and then further fine-tunes the model in a second stage on the ranking results from RankGPT4. These two studies not only demonstrate competitive results but also alleviate the non-reproducibility issue of the ranking results of black-box LLMs. Besides, researchers [162] have also tried to distill the ranking ability of a pairwise ranker, which is computationally demanding, into a simpler but more efficient pointwise ranker.
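A minimal sketch of InPars-style query synthesis follows: an LLM is prompted with a few demonstrations to write a query that a given document answers, and the resulting (query, document) pair becomes a positive training example. The prompt and interface are illustrative assumptions.

```python
FEW_SHOT = """Example 1:
Document: The Eiffel Tower was completed in 1889 for the World's Fair.
Relevant query: when was the eiffel tower built

Example 2:
Document: {document}
Relevant query:"""

def synthesize_query(llm, document: str) -> str:
    """Prompt an LLM to write a query answered by `document`."""
    return llm(FEW_SHOT.format(document=document)).strip()
```

In practice, such synthetic pairs are usually filtered (e.g., by a round-trip relevance check) before being used to train a reranker.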
162]. For example, ExaRanker [157] gener- ates explanations for retrieval datasets using GPT-3.5, and subsequently trains a seq2seq ranking model to generate relevance labels along with corresponding explanations for given query-document pairs. InPars-Light [158] is proposed as a cost-effective method to synthesize queries for docu- ments by prompting LLMs. Contrary to InPars-Light [158], a new dataset ChatGPT-RetrievalQA [159] is constructed by generating synthetic documents based on LLMs in response to user queries. Recently, many studies [160â 162] have also attempted to distill the document ranking capability of LLMs into a specialized model. RankVicuna [160] proposes to use the ranking list of RankGPT3.5 [152] as the gold list to train a 7B parameter Vicuna model. RankZephyr [161] introduces a two-stage training strategy for distillation: initially applying the RankVicuna recipe to train Zephyrγ in the first stage, and then further finetuning it in the second stage with the ranking results from RankGPT4. These two studies not only demonstrate competitive results but also alleviate the issue of ranking results non-reproducibility of black-box LLMs. Besides, researchers [162] have also tried to distill the rank- ing ability of a pairwise ranker, which is computationally demanding, into a simpler but more efficient pointwise ranker. The pointwise methods (Query Generation and Rel- evance Generation) judge the relevance of each query- document pair independently, thus offering lower time com- plexity and enabling batch inference. However, compared to other methods, it does not have an advantage in terms of performance. The listwise method yields significant per- formance especially when calling GPT-4, but suffers from expensive API cost and non-reproducibility [160]. Com- pared with the listwise method, the pairwise method shows competitive results based on a much smaller model FLAN- UL2 (20B). Stemming from the necessity to compare an extensive number of document pairs, its primary drawback is low efficiency. # 5.4 Limitations Although recent research on utilizing LLMs for document reranking has made significant progress, it still faces some challenges.
For example, considering cost and efficiency, minimizing the number of calls to LLM APIs is a problem worth studying. Besides, while existing studies mainly focus on applying LLMs to open-domain datasets (such as MS MARCO [111]) or relevance-based text ranking tasks, their adaptability to in-domain datasets [128] and non-standard ranking datasets [165] remains an area that demands more comprehensive exploration.

# 6 READER

With the impressive capabilities of LLMs in understanding, extracting, and processing textual data, researchers have explored expanding the scope of IR systems beyond content ranking to answer generation. In this evolution, a reader module has been introduced to generate answers based on the document corpus in IR systems. By integrating a reader module, IR systems can directly present conclusive passages to users. In this new paradigm, users can simply read the answering passages instead of analyzing a ranking list of documents. Furthermore, by repeatedly providing documents to LLMs based on the texts they are generating, the final generated answers can potentially be more accurate and information-rich than the original retrieved lists.

A naive strategy for implementing this function is to heuristically provide LLMs with documents relevant to the user queries or the previously generated texts to support subsequent generation. However, this passive approach limits LLMs to merely collecting documents from IR systems without active engagement. An alternative solution is to train LLMs to interact proactively with search engines. For example, LLMs can formulate their own queries instead of relying solely on user queries or generated texts for references. According to the way LLMs utilize IR systems in the reader module, we can categorize them into passive readers and active readers. Each approach has its advantages and challenges for implementing LLM-powered answer generation in IR systems. Furthermore, since the documents provided by upstream IR systems are sometimes too long to feed directly into LLMs, compression modules have been proposed to extractively or abstractively compress the retrieved contexts for LLMs to understand and generate answers for queries. We present these reader and compressor modules in the following parts and briefly introduce existing analysis work on the retrieval-augmented generation strategy and its applications.
171, 173, 175, 176, 178â 180]. By this means, these approaches use the LLMs and IR systems separately, with LLMs functioning as passive recipients of documents from the IR systems. The strategies for utilizing LLMs within IR systemsâ reader modules can be categorized into the following three groups according to the frequency of retrieving documents for LLMs. # 6.1.1 Once-Retrieval Reader To obtain useful references for LLMs to generate responses for user queries, an intuitive way is to retrieve the top doc- uments based on the queries themselves in the beginning. For example, REALM [166] adopts this strategy by directly attending the document contents to the original queries to predict the final answers based on masked language modeling. RAG [167] follows this strategy but applies the generative language modeling paradigm. However, these two approaches only use language models with limited parameters, such as BERT and BART. Recent approaches such as REPLUG [168] and Atlas [169] have improved them by leveraging LLMs such as GPTs, T5s, and LLaMAs for response generation. To yield better answer generation performances, these models usually fine-tune LLMs on QA tasks. However, due to the limited computing resources, many methods [170, 171, 179] choose to prompt LLMs for generation as they could use larger LMs in this way. Fur- thermore, to improve the quality of the generated answers, several approaches [172, 181] also try to train or prompt the LLMs to generate contexts such as citations or notes in addition to the answers to force LLMs to understand and assess the relevance of retrieved passages to the user queries. Some approaches [180] evaluate the importance of each retrieved reference using policy gradients to indicate which reference is more useful for generating. Besides, researchers explore instruction tuning LLMs such LLaMAs to improve their abilities to generate conclusive passages relying on retrieved knowledge [182, 183]. # 6.1.2 Periodic-Retrieval Reader However, while generating long conclusive answers, it is shown [23, 173] that only using the references retrieved by the original user intents as in once-retrieval readers may be inadequate. For example, when providing a pas- sage about â
Barack Obamaâ , language models may need additional knowledge about his university, which may not be included in the results of simply searching the initial query. In conclusion, language models may need extra references to support the following generation during the generating process, where multiple retrieval processes may be required. To address this, solutions such as RETRO [23] and RALM [173] have emerged, emphasizing the periodic collection of documents based on both the original queries and the concurrently generated texts (triggering a retrieval every n generated tokens). In this manner, when generating the text about the university career of Barack Obama, the LLM can receive additional documents as supplementary materials. This need for additional references highlights the necessity for multiple retrieval iterations to ensure robust- ness in subsequent answer generation. Notably, RETRO [23] introduces a novel approach incorporating cross-attention between the generating texts and the references within the Transformer attention calculation, as opposed to directly embedding references into the input texts of LLMs. Since it involves additional cross-attention modules in the Trans- formerâ s structure, RETRO trains this model from scratch. However, these two approaches mainly rely on the suc- cessive n tokens to separate generation and retrieve docu- ments, which may not be semantically continuous and may cause the collected references noisy and useless. To solve this problem, some approaches such as IRCoT [175] also explore retrieving documents for every generated sentence, which is a more complete semantic structure. Furthermore, researchers find that the whole generated passages can be considered as conclusive contexts for current queries and can be used to find more relevant knowledge to gener- ate more thorough answers. Consequently, many recent approaches [174, 184, 185] have also tried to extend this periodic-retrieval paradigm to iteratively using the whole generated passages to retrieve references to re-generate the
TABLE 7. The comparison of existing representative methods that have a passive reader module. REALM and RAG do not use LLMs, but their frameworks have been widely applied in many following approaches.

| Methods | Backbone models | Where to incorporate retrieval | When to retrieve | How to use LLMs |
| --- | --- | --- | --- | --- |
| REALM [166] | BERT | Input layer | In the beginning | Fine-tuning |
| RAG [167] | BART | Input layer | In the beginning | Fine-tuning |
| REPLUG [168] | GPT | Input layer | In the beginning | Fine-tuning |
| Atlas [169] | T5 | Input layer | In the beginning | Fine-tuning |
| Lazaridou et al. [170] | Gopher | Input layer | In the beginning | Prompting |
| He et al. [171] | GPT | Input layer | In the beginning | Prompting |
| Chain-of-Note [172] | LLaMA | Input layer | In the beginning | Fine-tuning |
| RALM [173] | LLaMA & OPT & GPT | Input layer | During generation (every n tokens) | Prompting |
| RETRO [23] | Transformer | Attention layer | During generation (every n tokens) | Training from scratch |
| ITERGEN [174] | GPT | Input layer | During generation (every answer) | Prompting |
| IRCoT [175] | Flan-T5 & GPT | Input layer | During generation (every sentence) | Prompting |
| FLARE [176] | GPT | Input layer | During generation (aperiodic) | Prompting |
| Self-RAG [177] | LLaMA | Input layer | During generation (aperiodic) | Fine-tuning |
confidence during text generation [186, 187], a low probability for a generated term could suggest that LLMs require additional knowledge. Specifically, when the proba- bility of a term falls below a predefined threshold, FLARE employs IR systems to retrieve references in accordance with the ongoing generated sentences, while removing these low-probability terms. FLARE adopts this strategy of prompting LLMs for answer generation solely based on the probabilities of generating terms, avoiding the need for fine- tuning while still maintaining effectiveness. Besides, self- RAG [177] tends to solve this problem by training LLMs such as LlaMA to generate specific tokens when they need additional knowledge to support following generations. Another critical model is introduced to judge whether the retrieved references are beneficial for generating. IR systems in a manner akin to human interaction such as issuing queries to seek information. To allow LLMs to actively use search engines, Self- Ask [188] and DSP [189] try to employ few-shot prompts for LLMs, triggering them to search queries when they believe it is required. For example, in a scenario where the query is â
For example, given the query "When was the existing tallest wooden lattice tower built?", these prompted LLMs can decide to issue the query "What is the existing tallest wooden lattice tower?" to gather necessary references, as they find the original query cannot be answered directly. Once they have acquired information about the tower, they can iteratively query IR systems for more details until they decide to generate the final answer instead of asking further questions. Notably, these methods use IR systems to construct a single reasoning chain for LLMs. MRC [190] further improves on them by prompting LLMs to explore multiple reasoning chains and subsequently combining all generated answers using LLMs.
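A minimal sketch of an active reader loop in the Self-Ask spirit follows: the LLM either asks a follow-up query (answered via search) or commits to a final answer. The control phrases and helpers are illustrative assumptions.

```python
def active_reader(llm, search, question: str, max_steps: int = 5) -> str:
    """Let the LLM decide when to search and when to answer."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Next step (either 'Follow up: <query>' "
                                "or 'Final answer: <answer>'):")
        if step.startswith("Follow up:"):
            query = step[len("Follow up:"):].strip()
            evidence = search(query, top_k=1)[0]  # answer the follow-up via IR
            transcript += f"Follow up: {query}\nIntermediate answer: {evidence}\n"
        else:
            return step.replace("Final answer:", "").strip()
    return llm(transcript + "Final answer:")  # force an answer after max_steps
```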
# 6.3 Compressor Existing LLMs, especially open-sourced ones, such as LLaMA and Flan-T5, have limited input lengths (usually 4,096 or 8,192 tokens). However, the documents or web pages retrieved by upstream IR systems are usually long. Therefore, it is difficult to concatenate all the retrieved documents and feed them into LLMs to generate answers. Though some approaches manage to solve these problems by aggregating the answers supported by each reference as the final answers, this strategy neglects the potential rela- tions between retrieved passages. A more straightforward way is to directly compress the retrieved documents into short input tokens or even dense vectors [191â
194]. We summarize representative passive reader approaches in Table 7, considering various aspects such as the backbone language models, the insertion point for retrieved refer- ences, the timing of using retrieval models, and the tuning strategy employed for LLMs. # 6.2 Active Reader However, the passive reader-based approaches separate IR systems and generative language models. This signifies that LLMs can only submissively utilize references provided by IR systems and are unable to interactively engage with the To compress the retrieved references, an intuitive idea is to extract the most useful K sentences from the retrieved documents. LeanContext [191] applies this method and trains a small model by reinforcement learning (RL) to select the top K similar sentences to the queries. The researchers also augment this strategy by using a free open-sourced text reduction method for the rest sentences as a supplement. Instead of using RL-based methods, RECOMP [192] directly uses the probability or the match ratio of the generated answers to the golden answers as signals to build training datasets and tune the compressor model. For example, the sentence corresponding to the highest generating proba-
Furthermore, FILCO [193] applies a "hindsight" method, which aligns the prior distribution (the predicted importance distribution over sentences without knowing the gold answer) to the posterior distribution (the same distribution computed with knowledge of the gold answer) to tune language models to select sentences.

However, these extractive methods may lose the overall intent spread across the references. Therefore, abstractive methods have been proposed to summarize the retrieved documents into short but concise summaries for downstream generation. These methods [192, 194] usually distill the summarization abilities of LLMs into small models. For example, TCRA [194] leverages GPT-3.5-turbo to build abstractive compression datasets for an mT5 model.

# 6.4 Analysis

With the rapid development of the above reader approaches, many researchers have begun to analyze the characteristics of retrieval-augmented LLMs:
• Liu et al. [195] find that the position of the relevant/golden reference has a significant influence on the final generation performance: performance is always better when the relevant reference is at the beginning or the end of the context, which indicates the need for a ranking module to order the retrieved knowledge.

• Ren et al. [196] observe that, by applying the retrieval-augmented generation strategy, LLMs develop a better awareness of their knowledge boundaries.

• Liu et al. [197] analyze different strategies for integrating retrieval systems and LLMs, such as concatenation (i.e., concatenating all references for answer generation) and post-fusion (i.e., aggregating the answers corresponding to each reference). They also explore several ways of combining these two strategies.
• Aksitov et al. [198] demonstrate that there exists an attribution-fluency tradeoff for retrieval-augmented LLMs: as more references are received, the attribution of generated answers increases while the fluency decreases.

• Mallen et al. [199] argue that always retrieving references to support answer generation can hurt question-answering performance. The reason is that LLMs themselves may have adequate knowledge for questions about popular entities, and retrieved noisy passages may interfere with and bias the answering process. To overcome this challenge, they devise a simple strategy that retrieves references only when the popularity of the entities in the query is quite low (see the sketch below). In this way, both the efficacy and the efficiency of retrieval-augmented generation improve.
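A minimal sketch of the popularity-gated retrieval just described; the popularity source (e.g., Wikipedia page views) and the threshold are illustrative assumptions.

```python
def adaptive_answer(llm, search, popularity, query: str,
                    entities: list[str], threshold: float = 1e4) -> str:
    """Retrieve only for long-tail entities; otherwise trust the LLM's
    parametric knowledge (a Mallen et al.-style heuristic)."""
    if entities and min(popularity(e) for e in entities) < threshold:
        context = "\n".join(search(query, top_k=3))
        return llm(f"References:\n{context}\n\nQuestion: {query}\nAnswer:")
    return llm(f"Question: {query}\nAnswer:")
```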
# 6.5 Applications

Recently, researchers [200-205] have applied the retrieval-augmented generation strategy to areas such as clinical QA, medical QA, and financial QA to enhance LLMs with external knowledge and to develop domain-specific applications. For example, ATLANTIC [201] adapts Atlas to the scientific domain to derive a science QA system. Besides, some approaches [206] also apply techniques from federated learning, such as multi-party computation, to perform personal retrieval-augmented generation with privacy protection.

In addition, to better facilitate the deployment of these retrieval-augmented generation systems, some tools or frameworks have been proposed [178, 207, 208]. For example, RETA-LLM [178] breaks the whole complex generation task down into several simple modules in the reader pipeline. These modules include a query rewriting module for refining query intents, a passage extraction module for aligning reference lengths with LLM limitations, and a fact verification module for confirming the absence of fabricated information in the generated answers.

# 6.6 Limitations

Several IR systems applying the retrieval-augmented generation strategy, such as New Bing and Langchain, have already entered commercial use. However, there are also challenges in this novel retrieval-augmented content generation paradigm, such as effective query reformulation, optimal retrieval frequency, correct document comprehension, accurate passage extraction, and effective content summarization. It is crucial to address these challenges to effectively realize the potential of LLMs in this paradigm.

# 7 SEARCH AGENT

With the development of LLMs, IR systems are facing new changes. Among them, developing LLMs as intelligent agents has attracted more and more attention. This conceptual shift aims to mimic human browsing patterns, thereby enhancing the capability of these models to handle complex retrieval tasks. Empowered by the advanced natural language understanding and generation capabilities of LLMs, these agents can autonomously search, interpret, and synthesize information from a wide range of sources.

One way to achieve this ability is to design a pipeline that combines a series of modules and assigns different roles to them. Such a pre-defined pipeline mimics users' behaviors on the web by breaking the task into several sub-tasks that are performed by different modules. However, this kind of static agent cannot deal with the complex nature of users'
behavior sequences on the web and may face challenges when interacting with real-world environments. An alternative solution is to allow LLMs to freely explore the web and interact with it themselves, namely letting the LLM itself decide what action to take next based on feedback from the environment (or humans). These agents have more flexibility and act more like human beings.

# 7.1 Static Agent

To mimic human search patterns, a straightforward approach is to design a static system that browses the web and synthesizes information step by step [209-214]. By breaking the information-seeking process into multiple subtasks, these systems design a pipeline of various LLM-based modules in advance and assign different subtasks to them.

LaMDA [209] serves as an early example of a static agent. It consists of a family of Transformer-based neural language models specialized for dialogue, with up to 137B parameters, pre-trained on 1.56T tokens of public dialogue data and web text. The study emphasizes the model's development
The study emphasizes the modelâ s development 18 through a static pipeline, encompassing large-scale pre- training, followed by strategic fine-tuning stages aimed at enhancing three critical aspects: dialogue quality, safety, and groundedness. It can integrate external IR systems for factual grounding. This integration allows LaMDA to access and use external and authoritative sources when generat- ing responses. SeeKeR [210] also incorporates the Internet search into its modular architecture for generating more fac- tual responses. It performs three sequential tasks: generating a search query, generating knowledge from search results, and generating a final response. GopherCite [213] uses a search engine like Google Search to find relevant sources. It then synthesizes a response that includes verbatim quotes from these sources as evidence, aligning the Gopherâ s out- put with verified information.
WebAgent [212] develops a series of tasks, including instruction decomposition and planning, action programming, and HTML summarization. It can navigate the web, understand and synthesize infor- mation from multiple sources, and execute web-based tasks, effectively functioning as an advanced search and interac- tion agent. WebGLM [211] designs an LLM-augmented re- triever, a bootstrapped generator, and a human preference- aware scorer. These components work together to provide accurate web-enhanced question-answering capabilities that are sensitive to human preferences. Shi et al. [214] focus on enhancing the relevance, responsibility, and trustworthiness of LLMs in web search applications via an intent-aware gen- erator, an evidence-sensitive validator, and a multi-strategy supported optimizer. # 7.2 Dynamic Agent Instead of statically arranging LLMs in a pipeline, We- bGPT [24] takes an alternate approach by training LLMs to use search engines automatically. This is achieved through the application of a reinforcement learning framework, within which a simulated environment is constructed for GPT-3 models. Specifically, the WebGPT model employs special tokens to execute actions such as querying, scrolling through rankings, and quoting references on search en- gines. This innovative approach allows the GPT-3 model to use search engines for text generation, enhancing the reliability and real-time capability of the generated texts. A following study [215] has extended this paradigm to the domain of Chinese question answering. Besides, some works develop important benchmarks for interactive web- based agents [216â 218]. For example, WebShop [217] aims to provide a scalable, interactive web-based environment for language understanding and decision-making, focusing on the task of online shopping. ASH (Actor-Summarizer- Hierarchical) prompting [219] significantly enhances the ability of LLMs on WebShop benchmark. It first takes a raw observation from the environment and produces a new, more meaningful representation that aligns with the specific goal. Then, it dynamically predicts the next action based on the summarized observation and the interaction history.
# 7.3 Limitations Though the aspect of static search agents has been thor- oughly studied, the literature on dynamic search agents remains limited. Some agents may lack mechanisms for real-time fact-checking or verification against authoritative sources, leading to the potential dissemination of misinfor- mation. Moreover, since LLMs are trained on data from the Internet, they may inadvertently perpetuate biases present in the training data. This can lead to biased or offensive outputs and may collect unethical content from the web. Finally, as LLMs process user queries, there are concerns regarding user privacy and data security, especially if sensi- tive or personal information is involved in the queries. 8 FUTURE DIRECTION In this survey, we comprehensively reviewed recent ad- vancements in LLM-enhanced IR systems and discussed their limitations. Since the integration of LLMs into IR systems is still in its early stages, there are still many opportunities and challenges. In this section, we summarize the potential future directions in terms of the four modules in an IR system we just discussed, namely query rewriter, retriever, reranker, and reader. In addition, as evaluation has also emerged as an important aspect, we will also introduce the corresponding research problems that need to be addressed in the future. Another discussion about important research topics on applying LLMs to IR can be found in a recent perspective paper [53].
# 8.1 Query Rewriter LLMs have enhanced query rewriting for both ad-hoc and conversational search scenarios. Most of the existing meth- ods rely on prompting LLMs to generate new queries. While yielding remarkable results, the refinement of rewriting quality and the exploration of potential application scenar- ios require further investigation. â ¢ Rewriting queries according to ranking performance. A typical paradigm of prompting-based methods is providing LLMs with several ground-truth rewriting cases (optional) and the task description of query rewriting. Despite LLMs being capable of identifying potential user intents of the query [220], they lack awareness of the resulting retrieval quality of the rewritten query. The absence of this connec- tion can result in rewritten queries that seem correct yet pro- duce unsatisfactory ranking results. Although some existing studies have used reinforcement learning to adjust the query rewriting process according to generation results [100], a substantial realm of research remains unexplored concern- ing the integration of ranking results.
â ¢ Improving query rewriting in conversational search. As yet, primary efforts have been made to improve query rewriting in ad-hoc search. In contrast, conversational search presents a more developed landscape with a broader scope for LLMs to contribute to query understanding. By incorporating historical interactive information, LLMs can adapt system responses based on user preferences, providing a more effective conversational experience. However, this potential has not been explored in depth. In addition, LLMs could also be used to simulate user behavior in conversational search scenarios, providing more training data, which are urgently needed in current research.
â ¢ Achieving personalized query rewriting. LLMs offer valu- able contributions to personalized search through their ca- pacity to analyze user-specific data. In terms of query rewrit- ing, with the excellent language comprehension ability of 19 LLMs, it is possible to leverage them to build user profiles based on usersâ search histories (e.g., issued queries, click- through behaviors, and dwell time). This empowers the achievement of personalized query rewriting for enhanced IR and finally benefits personalized search or personalized recommendation. # 8.2 Retriever Leveraging LLMs to improve retrieval models has received considerable attention, promising an enhanced understand- ing of queries and documents for improved ranking per- formance.
However, despite strides in this field, several challenges and limitations still need to be investigated in the future: â ¢ Reducing the latency of LLM-based retrievers. LLMs, with their massive parameters and world knowledge, often entail high latency during the inferring process. This delay poses a significant challenge for practical applications of LLM-based retrievers, as search engines require in-time responses. To address this issue, promising research directions include transferring the capabilities of LLMs to smaller models, exploring quantization techniques for LLMs in IR tasks, and so on.
â ¢ Simulating realistic queries for data augmentation. Since the high latency of LLMs usually blocks their online applica- tion for retrieval tasks, many existing studies have leveraged LLMs to augment training data, which is insensitive to inference latency. Existing methods that leverage LLMs for data augmentation often generate queries without aligning them with real user queries, leading to noise in the training data and limiting the effectiveness of retrievers. As a conse- quence, exploring techniques such as reinforcement learning to enable LLMs to simulate the way that real queries are issued holds the potential for improving retrieval tasks.
â ¢ Incremental indexing for generative retrieval. As elabo- rated in Section 4.2.2, the emergence of LLMs has paved the way for generative retrievers to generate document identifiers for retrieval tasks. This approach encodes doc- ument indexes and knowledge into the LLM parameters. However, the static nature of LLM parameters, coupled with the expensive fine-tuning costs, poses challenges for updating document indexes in generative retrievers when new documents are added. Therefore, it is crucial to explore methods for constructing an incremental index that allows for efficient updates in LLM-based generative retrievers.
â ¢ Supporting multi-modal search. Web pages usually con- tain multi-modal information, including texts, images, au- dios, and videos. However, existing LLM-enhanced IR sys- tems mainly support retrieval for text-based content. A straightforward solution is to replace the backbone with multi-modal large models, such as GPT-4 [80]. However, this undoubtedly increases the cost of deployment. A promising yet challenging direction is to combine the language un- derstanding capability of LLMs with existing multi-modal retrieval models. By this means, LLMs can contribute their language skills in handling different types of content. # 8.3 Reranker In Section 5, we have discussed the recent advanced tech- niques of utilizing LLMs for the reranking task. Some poten- tial future directions in reranking are discussed as follows.
• Enhancing the online availability of LLMs. Though effective, many LLMs have a massive number of parameters, making them challenging to deploy in online applications. Moreover, many reranking methods [152, 153] rely on calling LLM APIs, incurring considerable costs. Consequently, devising effective approaches, such as distilling LLM rankers into small models, to enhance the online applicability of LLMs emerges as a research direction worth exploring; a distillation sketch follows.
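One plausible form of such distillation, sketched below under assumed interfaces, treats an LLM ranker's pairwise preferences as soft labels and trains a small cross-encoder to reproduce them with a RankNet-style loss. `student` and the teacher preference `llm_prefers_a` are placeholders; nothing here names a real library object or a specific published recipe.

```python
# Hedged sketch: distilling pairwise preferences of an LLM ranker into a
# small student scorer. Interfaces are assumed, not from a specific paper.
import torch
import torch.nn.functional as F

def ranknet_distill_step(student, optimizer, query, doc_a, doc_b,
                         llm_prefers_a: float):
    """One update; llm_prefers_a in [0, 1] is the teacher's soft belief
    that doc_a should rank above doc_b for this query."""
    s_a = student(query, doc_a)          # scalar relevance score
    s_b = student(query, doc_b)
    # RankNet: model P(a > b) as a logistic over the score difference.
    p_a = torch.sigmoid(s_a - s_b)
    loss = F.binary_cross_entropy(
        p_a, torch.tensor(llm_prefers_a, dtype=p_a.dtype))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained, the student can serve all online traffic at cross-encoder cost, with the LLM consulted only offline to label fresh training pairs.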
• Improving personalized search. Many existing LLM-based reranking methods focus mainly on the ad-hoc reranking task. However, by incorporating user-specific information, LLMs can also improve the effectiveness of personalized reranking. For example, by analyzing users' search history, LLMs can construct accurate user profiles and rerank the search results accordingly, yielding personalized results with higher user satisfaction.

• Adapting to diverse ranking tasks. In addition to document reranking, there are other ranking tasks, such as response ranking, evidence ranking, and entity ranking, which also belong to the universal information access system. Steering LLMs toward adeptness in these diverse ranking tasks can be achieved through specialized methodologies such as instruction tuning (a possible data format is sketched below). Exploring this avenue holds promise as an intriguing and valuable research trajectory.
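As an illustration of what such instruction tuning might consume, the snippet below builds training examples that share one verbalized format across several ranking tasks, so a single LLM can be tuned on them jointly. The task names and templates are invented for illustration.

```python
# Hedged sketch: one instruction format covering several ranking tasks.
# Task names and templates are illustrative placeholders.
TEMPLATES = {
    "document_ranking": "Rank the documents by relevance to the query.",
    "response_ranking": "Rank the candidate responses by suitability "
                        "for the dialogue context.",
    "evidence_ranking": "Rank the passages by how well they support "
                        "the claim.",
}

def build_example(task: str, context: str, candidates: list,
                  gold_order: list) -> dict:
    """Serialize one ranking instance into an instruction-tuning record."""
    lines = [f"[{i + 1}] {c}" for i, c in enumerate(candidates)]
    prompt = (f"Instruction: {TEMPLATES[task]}\n"
              f"Input: {context}\n" + "\n".join(lines) +
              "\nOutput the identifiers in ranked order.")
    target = " > ".join(f"[{i + 1}]" for i in gold_order)
    return {"prompt": prompt, "completion": target}

ex = build_example("evidence_ranking",
                   "Claim: Tides are driven by the moon.",
                   ["The moon's gravity pulls ocean water.",
                    "Tides occur twice daily in most places."],
                   gold_order=[0, 1])
```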
# 8.4 Reader

With the increasing capabilities of LLMs, future interaction between users and IR systems will change significantly. Owing to the powerful natural language understanding and generation capabilities of LLMs, the traditional search paradigm of returning ranked results is expected to be progressively replaced by the reader module synthesizing conclusive answer passages for user queries. Although such strategies have already been investigated by academia and adopted by industry, as stated in Section 6, there remains much room for exploration.
• Improving the reference quality for LLMs. To support answer generation, existing approaches usually feed the retrieved documents directly to the LLMs as references. However, since a document usually covers many topics, some passages in it may be irrelevant to the user query and can introduce noise into the LLM's generation. It is therefore necessary to explore techniques for extracting relevant snippets from retrieved documents (see the sketch after this list), enhancing the performance of retrieval-augmented generation.

• Improving the answer reliability of LLMs. Incorporating the retrieved references has significantly alleviated the
"hallucination" problem of LLMs. However, it remains uncertain whether LLMs actually consult these supporting materials when answering queries. Some studies [196] have revealed that LLMs can still provide unfaithful answers even with additional references. The reliability of such conclusive answers might therefore be lower than that of the ranking results provided by traditional IR systems. It is essential to investigate the influence of these references on the generation process, thereby improving the credibility of reader-based IR systems.
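The snippet-extraction direction above can be prototyped with a simple pre-filter: split each retrieved document into passages, embed them, and keep only the passages most similar to the query before building the reader prompt. The sketch below uses the sentence-transformers library as one assumed embedding backend; any dense encoder would serve, and the naive sentence splitting is for brevity only.

```python
# Hedged sketch: keep only query-relevant snippets of retrieved documents
# before prompting the reader. sentence-transformers is one possible
# encoder backend; the checkpoint is a common public model.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def select_snippets(query, documents, window=3, top_k=5):
    """Split documents into sentence windows and return the top_k
    windows by cosine similarity to the query."""
    passages = []
    for doc in documents:
        sents = [s.strip() for s in doc.split(".") if s.strip()]
        passages += [". ".join(sents[i:i + window])
                     for i in range(0, len(sents), window)]
    q_emb = encoder.encode(query, convert_to_tensor=True)
    p_emb = encoder.encode(passages, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, p_emb)[0]
    best = scores.topk(min(top_k, len(passages))).indices.tolist()
    return [passages[i] for i in best]
```

Feeding only the selected snippets shortens the reader's context and removes off-topic passages, at the cost of an extra (but cheap) embedding pass over the retrieved set.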
# 8.5 Search Agent

With the outstanding performance of LLMs, search patterns may shift entirely from traditional IR systems to autonomous search agents. In Section 7, we have discussed many existing works that utilize a static or dynamic pipeline to browse the web autonomously. These works can be regarded as pioneers of this new search paradigm, yet there is still plenty of room for improvement.

• Enhancing the trustworthiness of LLMs. When LLMs are enabled to browse the web, it is important to ensure the validity of the retrieved documents; otherwise, unfaithful information may aggravate the LLMs'
"hallucination" problem. Moreover, even if the gathered information is of high quality, it remains unclear whether it is actually used when synthesizing responses. A potential strategy to address this issue is enabling LLMs to autonomously validate the documents they scrape; such a self-validation process could incorporate mechanisms for assessing the credibility and accuracy of the information within these documents (a minimal sketch follows this list).

• Mitigating bias and offensive content in LLMs. The presence of biases and offensive content in LLM outputs is a pressing concern. This issue primarily stems from biases inherent in the training data and can be amplified by low-quality information gathered from the web. Mitigating it requires a multi-faceted approach, including improvements to training data, algorithmic adjustments, and continuous monitoring for bias and inappropriate content in what LLMs collect and generate.
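A self-validation loop of the kind suggested above could be as simple as asking the LLM itself to grade each scraped page before it is admitted into the synthesis context. In the sketch below, `llm` is a placeholder for any text-in/text-out completion function, and the rubric and threshold are invented for illustration; nothing here names a real agent framework's API.

```python
# Hedged sketch: an agent-side credibility filter over scraped documents.
import json
from typing import Callable, List

RUBRIC = (
    "Assess the following web page excerpt. Reply with JSON: "
    '{"credibility": <integer 0-5>, "reason": "<short explanation>"}.'
    "\n\nExcerpt:\n{doc}"
)

def validate_documents(docs: List[str], llm: Callable[[str], str],
                       min_credibility: int = 3) -> List[str]:
    """Keep only documents the model itself judges credible enough."""
    kept = []
    for doc in docs:
        # .replace rather than .format, since the rubric contains braces.
        try:
            verdict = json.loads(llm(RUBRIC.replace("{doc}", doc[:1500])))
        except (json.JSONDecodeError, TypeError):
            continue  # unparseable verdicts are treated as rejections
        if verdict.get("credibility", 0) >= min_credibility:
            kept.append(doc)
    return kept
```

Self-grading is of course imperfect, since the same model supplies both the answer and the judgment, so cross-checking against independent sources remains an open complement to this filter.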
# 8.6 Evaluation

LLMs have attracted significant attention in the field of IR due to their strong abilities in context understanding and text generation. To validate the effectiveness of LLM-enhanced IR approaches, it is crucial to develop appropriate evaluation metrics. Given the growing significance of readers as integral components of IR systems, evaluation should consider two aspects: assessing ranking performance and evaluating generation performance.

• Generation-oriented ranking evaluation. Traditional evaluation metrics for ranking primarily compare the retrieval results of IR models with ground-truth (relevance) labels. Typical metrics include precision, recall, mean reciprocal rank (MRR) [221], mean average precision (MAP), and normalized discounted cumulative gain (nDCG) [222]. These metrics measure the alignment between ranking results and human preferences over those results. Nevertheless, they may fall short in capturing a document's role in the generation of passages or answers, as its relevance to the query alone might not adequately reflect that role. A document's contribution to the quality of the generated answer could instead be leveraged to evaluate its usefulness more comprehensively; one possible formulation is sketched below. A formal and rigorous evaluation metric for ranking that centers on generation quality has yet to be defined.
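One way such a metric might be prototyped, sketched below under assumed interfaces, is to score each document by the marginal gain in answer quality (e.g., token-level F1 against a gold answer) when it is added to the reader's context. Here `reader` and `answer_f1` are placeholders; this is a thought experiment, not an established metric.

```python
# Hedged sketch of a generation-oriented "document utility" score:
# the marginal improvement in answer quality when a document joins the
# reader's context. `reader(query, docs) -> answer` and
# `answer_f1(pred, gold) -> float` are assumed interfaces.
from typing import Callable, List

def generation_utility(query: str, gold_answer: str,
                       ranked_docs: List[str],
                       reader: Callable[[str, List[str]], str],
                       answer_f1: Callable[[str, str], float]) -> List[float]:
    """Marginal F1 gain contributed by each document, in ranked order."""
    utilities, context = [], []
    base = answer_f1(reader(query, context), gold_answer)
    for doc in ranked_docs:
        context = context + [doc]
        score = answer_f1(reader(query, context), gold_answer)
        utilities.append(score - base)  # can be negative: noisy documents
        base = score
    return utilities
```

Aggregating such utilities, for instance with a rank-discounted sum in the spirit of nDCG, would yield a ranking metric that rewards documents for improving the final answer rather than for topical relevance alone.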