arXiv:2308.07107
Large Language Models for Information Retrieval: A Survey
• Text generation evaluation. The wide application of LLMs in IR has led to a notable enhancement in their generation capability. Consequently, there is an imperative demand for novel evaluation strategies to effectively evaluate the performance of passage or answer generation. Previous evaluation metrics for text generation have several limitations, including: (1) Dependency on lexical matching: methods such as BLEU [223] or ROUGE [224] primarily evaluate the quality of generated outputs based on n-gram matching. This approach cannot account for lexical diversity or contextual semantics. As a result, models may favor generating common phrases or sentence structures rather than producing creative and novel content. (2) Insensitivity to subtle differences: existing evaluation methods may be insensitive to subtle differences in generated outputs. For example, if a generated output has minor semantic differences from the reference answer but is otherwise similar, traditional methods might overlook these nuanced distinctions. (3) Lack of ability to evaluate factuality: LLMs are prone to "hallucination" problems [225–228].
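Limitation (1) is easy to reproduce with a toy metric. The sketch below uses a hand-rolled unigram precision, a drastic simplification of BLEU-1 rather than any of the metric implementations cited above, and the sentences are invented for illustration:

```python
def unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate tokens that also appear in the reference.
    A drastic simplification of BLEU-1, for illustration only."""
    cand = candidate.lower().split()
    ref = set(reference.lower().split())
    return sum(tok in ref for tok in cand) / len(cand)

reference = "the treaty was signed in 1648 ending the war"
# Faithful paraphrase: correct content, little lexical overlap.
paraphrase = "hostilities ceased when the accord was ratified in 1648"
# Near-verbatim copy with one factual error (wrong year).
wrong = "the treaty was signed in 1748 ending the war"

print(unigram_precision(paraphrase, reference))  # ~0.44: low despite being correct
print(unigram_precision(wrong, reference))       # ~0.89: high despite the factual error
```

The paraphrase is penalized and the factually wrong copy is rewarded, which is exactly the failure mode of lexical-matching metrics described above.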
The hallucinated texts can closely resemble the oracle texts in terms of vocabulary usage, sentence structures, and patterns, while having non-factual content. It is hard for existing methods to identify such problems, while the incorporation of additional knowledge sources such as knowledge bases or reference texts could potentially aid in addressing this challenge.

8.7 Bias

Since ChatGPT was released, LLMs have drawn much attention from both academia and industry. The wide application of LLMs has led to a notable increase in content on the Internet that is not authored by humans but rather generated by these language models. However, as LLMs may hallucinate and generate non-factual texts, the increasing amount of LLM-generated content also brings worries that this content may provide fictitious information to users across IR systems. More severely, researchers [229, 230] show that some modules in IR systems, such as the retriever and reranker, especially those based on neural models, may prefer LLM-generated documents, since their topics are more consistent and their perplexity is lower compared with human-written documents. The authors refer to this phenomenon as the "source bias" towards LLM-generated text. It is challenging but necessary to consider how to build IR systems free from this category of bias.

9 CONCLUSION

In this survey, we have conducted a thorough exploration of the transformative impact of LLMs on IR across various dimensions. We have organized existing approaches into distinct categories based on their functions: query rewriting, retrieval, reranking, and reader modules. In the domain of query rewriting, LLMs have demonstrated their effectiveness in understanding ambiguous or multi-faceted queries, enhancing the accuracy of intent identification. In the context of retrieval, LLMs have improved retrieval accuracy by enabling more nuanced matching between queries and documents, considering context as well.
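The lower perplexity of LLM-generated text mentioned in the bias discussion above can be made concrete: perplexity is the exponential of the average negative log-likelihood of the tokens, so fluent, predictable phrasing scores lower. The per-token log-probabilities below are invented for illustration and are not drawn from the cited studies [229, 230]:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log-probabilities under some scoring model:
    exp of the mean negative log-likelihood. Lower = more predictable text."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical log-probs assigned by a language model to each token:
llm_generated = [-1.2, -0.8, -1.0, -0.9]   # fluent, predictable phrasing
human_written = [-2.5, -1.9, -3.1, -2.2]   # more varied word choice

print(perplexity(llm_generated))  # ~2.65
print(perplexity(human_written))  # ~11.30
```

A ranker that (directly or indirectly) favors low-perplexity text would thus systematically prefer the LLM-generated document, which is the mechanism behind the reported source bias.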
Within the reranking realm, LLM-enhanced models consider more fine-grained linguistic nuances when re-ordering results. The incorporation of reader modules in IR systems represents a significant step towards generating comprehensive responses instead of mere document lists. The integration of LLMs into IR systems has brought about a fundamental change in how users engage with information and knowledge. From query rewriting to retrieval, reranking, and
reader modules, LLMs have enriched each aspect of the IR process with advanced linguistic comprehension, semantic representation, and context-sensitive handling. As this field continues to progress, the journey of LLMs in IR portends a future characterized by more personalized, precise, and user-centric search encounters. This survey focuses on reviewing recent studies of applying LLMs to different IR components and using LLMs as search agents. Beyond this, a more significant problem brought by the appearance of LLMs is: is the conventional IR framework necessary in the era of LLMs? For example, traditional IR aims to return a ranking list of documents that are relevant to issued queries. However, the development of generative language models has introduced a novel paradigm: the direct generation of answers to input questions. Furthermore, according to a recent perspective paper [53], IR might evolve into a fundamental service for diverse systems. For example, in a multi-agent simulation system [231], an IR component can be used for memory recall. This implies that there will be many new challenges in future IR.

REFERENCES

[1] Y. Wu, W. Wu, C. Xing, M. Zhou, and Z. Li, "
Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, R. Barzilay and M. Kan, Eds. Association for Computational Linguistics, 2017, pp. 496–505.
[2] H. Shum, X. He, and D. Li, "From Eliza to XiaoIce: challenges and opportunities with social chatbots,"
Frontiers Inf. Technol. Electron. Eng., vol. 19, no. 1, pp. 10–26, 2018.
[3] V. Karpukhin, B. Oguz, S. Min, P. S. H. Lewis, L. Wu, S. Edunov, D. Chen, and W. Yih, "Dense passage retrieval for open-domain question answering," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, B. Webber, T. Cohn, Y. He, and Y. Liu, Eds. Association for Computational Linguistics, 2020, pp. 6769–6781.
[4] R. Datta, D. Joshi, J. Li, and J. Z. Wang, "
Image retrieval: Ideas, influences, and trends of the new age," ACM Comput. Surv., vol. 40, no. 2, pp. 5:1–5:60, 2008.
[5] C. Yuan, W. Zhou, M. Li, S. Lv, F. Zhu, J. Han, and S. Hu, "Multi-hop selector network for multi-turn response selection in retrieval-based chatbots," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, K.
Inui, J. Jiang, V. Ng, and X. Wan, Eds. Association for Computational Linguistics, 2019, pp. 111–120.
[6] Y. Zhu, J. Nie, K. Zhou, P. Du, and Z. Dou, "Content selection network for document-grounded retrieval-based chatbots," in Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021,
Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, ser. Lecture Notes in Computer Science, D. Hiemstra, M. Moens, J. Mothe, R. Perego, M. Potthast, and F. Sebastiani, Eds., vol. 12656. Springer, 2021, pp. 755–769.
[7] Y. Zhu, J. Nie, K. Zhou, P. Du, H. Jiang, and Z. Dou, "
Proactive retrieval-based chatbots based on relevant knowledge and goals," in SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, F. Diaz, C. Shah, T. Suel, P. Castells, R. Jones, and T. Sakai, Eds. ACM, 2021, pp. 2000–2004.
[8] H. Qian, Z. Dou, Y. Zhu, Y. Ma, and J. Wen, "
Learning implicit user profiles for personalized retrieval-based chatbot," CoRR, vol. abs/2108.07935, 2021.
[9] Y. Qu, Y. Ding, J. Liu, K. Liu, R. Ren, W. X. Zhao, D. Dong, H. Wu, and H. Wang, "RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering," in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, K.
Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tür, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, and Y. Zhou, Eds. Association for Computational Linguistics, 2021, pp. 5835–5847.
[10] Y. Arens, C. A. Knoblock, and W. Shen, "
Query reformulation for dynamic information integration," J. Intell. Inf. Syst., vol. 6, no. 2/3, pp. 99–130, 1996.
[11] J. Huang and E. N. Efthimiadis, "Analyzing and evaluating query reformulation strategies in web search logs," in Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM 2009, Hong Kong, China, November 2-6, 2009, D.
W. Cheung, I. Song, W. W. Chu, X. Hu, and J. Lin, Eds. ACM, 2009, pp. 77–86.
[12] R. F. Nogueira, W. Yang, K. Cho, and J. Lin, "Multi-stage document ranking with BERT," CoRR, vol. abs/1910.14424, 2019.
[13] R. F. Nogueira, Z. Jiang, R. Pradeep, and J. Lin, "
Document ranking with a pretrained sequence-to-sequence model," in EMNLP (Findings), ser. Findings of ACL, vol. EMNLP 2020. Association for Computational Linguistics, 2020, pp. 708–718.
[14] Y. Zhu, J. Nie, Z. Dou, Z. Ma, X. Zhang, P. Du, X. Zuo, and H. Jiang, "Contrastive learning of user behavior sequence for context-aware document ranking," in CIKM '
21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1 - 5, 2021, G. Demartini, G. Zuccon, J. S. Culpepper, Z. Huang, and H. Tong, Eds. ACM, 2021, pp. 2780–2791.
[15] J. Teevan, S. T. Dumais, and E. Horvitz, "
Personalizing search via automated analysis of interests and activities," in SIGIR 2005: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Salvador, Brazil, August 15-19, 2005, R. A. Baeza-Yates, N. Ziviani, G. Marchionini, A. Moffat, and J. Tait, Eds. ACM, 2005, pp. 449–456.
[16] P. N. Bennett, R. W. White, W. Chu, S. T. Dumais, P. Bailey, F. Borisyuk, and X.
Cui, "Modeling the impact of short- and long-term behavior on search personalization," in The 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '12, Portland, OR, USA, August 12-16, 2012, W. R. Hersh, J. Callan, Y. Maarek, and M. Sanderson, Eds. ACM, 2012, pp. 185–194.
[17] S. Ge, Z. Dou, Z. Jiang, J. Nie, and J. Wen, "
Personalizing search results using hierarchical RNN with query-aware attention," in Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, A. Cuzzocrea, J. Allan, N. W. Paton, D. Srivastava, R. Agrawal, A. Z. Broder, M. J. Zaki, K. S. Candan, A. Labrinidis, A. Schuster, and H. Wang, Eds. ACM, 2018, pp. 347–356.
[18] Y. Zhou, Z. Dou, Y. Zhu, and J. Wen, "
PSSL: self-supervised learning for personalized search with contrastive sampling," in CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1 - 5, 2021, G. Demartini, G. Zuccon, J. S. Culpepper, Z. Huang, and H. Tong, Eds. ACM, 2021, pp. 2749–2758.
[19] J. G. Carbonell and J. Goldstein, "
The use of MMR, diversity-based reranking for reordering documents and producing summaries," in SIGIR '98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia, W. B. Croft, A. Moffat, C. J. van Rijsbergen, R. Wilkinson, and J. Zobel, Eds. ACM, 1998, pp. 335–336.
[20] R. Agrawal, S. Gollapudi, A. Halverson, and S.
Ieong, "Diversifying search results," in Proceedings of the Second International Conference on Web Search and Web Data Mining, WSDM 2009, Barcelona, Spain, February 9-11, 2009, R. Baeza-Yates, P. Boldi, B. A. Ribeiro-Neto, and B. B. Cambazoglu, Eds. ACM, 2009, pp. 5–14.
[21] J. Liu, Z. Dou, X. Wang, S. Lu, and J. Wen, "
DVGAN: A minimax game for search result diversification combining explicit and implicit features," in Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, J. X. Huang, Y. Chang, X. Cheng, J. Kamps, V. Murdock, J. Wen, and Y. Liu, Eds.
ACM, 2020, pp. 479–488.
[22] Z. Su, Z. Dou, Y. Zhu, X. Qin, and J. Wen, "Modeling intent graph for search result diversification," in SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, F. Diaz, C. Shah, T. Suel, P. Castells, R. Jones, and T. Sakai, Eds. ACM, 2021, pp. 736–746.
[23] J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. van den Driessche, J. Lespiau, B. Damoc, A. Clark, D. de Las Casas, A. Guy, J. Menick, R. Ring, T. Hennigan, S. Huang, L. Maggiore, C. Jones, A. Cassirer, A. Brock, M. Paganini, G. Irving, O. Vinyals, S. Osindero, K. Simonyan, J. W. Rae, E. Elsen, and L.
Sifre, "Improving language models by retrieving from trillions of tokens," in International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 2022, pp. 2206–2240.
[24] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, X. Jiang, K. Cobbe, T. Eloundou, G. Krueger, K. Button, M. Knight, B. Chess, and J. Schulman, "
WebGPT: Browser-assisted question-answering with human feedback," CoRR, vol. abs/2112.09332, 2021.
[25] G. Salton and M. McGill, Introduction to Modern Information Retrieval. McGraw-Hill Book Company, 1984.
[26] G. Salton, A. Wong, and C. Yang, "A vector space model for automatic indexing," Commun. ACM, vol. 18, no. 11, pp. 613–620, 1975.
[27] F. Song and W. B.
Croft, "A general language model for information retrieval," in Proceedings of the 1999 ACM CIKM International Conference on Information and Knowledge Management, Kansas City, Missouri, USA, November 2-6, 1999. ACM, 1999, pp. 316–321.
[28] J. Martineau and T. Finin, "Delta TFIDF: an improved feature space for sentiment analysis," in Proceedings of the Third International Conference on Weblogs and Social Media, ICWSM 2009, San Jose, California, USA, May 17-20, 2009, E. Adar, M. Hurst, T. Finin, N. S. Glance, N. Nicolov, and B. L. Tseng, Eds. The AAAI Press, 2009.
[29] S. E. Robertson, S. Walker, S. Jones, M. Hancock-Beaulieu, and M. Gatford, "
Okapi at TREC-3," in Proceedings of The Third Text REtrieval Conference, TREC 1994, Gaithersburg, Maryland, USA, November 2-4, 1994, ser. NIST Special Publication, D. K. Harman, Ed., vol. 500-225. National Institute of Standards and Technology (NIST), 1994, pp. 109–126.
[30] J. Guo, Y. Fan, Q. Ai, and W. B.
Croft, "A deep relevance matching model for ad-hoc retrieval," in Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016, S. Mukhopadhyay, C. Zhai, E. Bertino, F. Crestani, J. Mostafa, J. Tang, L. Si, X. Zhou, Y. Chang, Y. Li, and P. Sondhi, Eds.
ACM, 2016, pp. 55–64.
[31] L. Xiong, C. Xiong, Y. Li, K. Tang, J. Liu, P. N. Bennett, J. Ahmed, and A. Overwijk, "Approximate nearest neighbor negative contrastive learning for dense text retrieval," in 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
[32] J. Lin, R. F. Nogueira, and A. Yates, Pretrained Transformers for Text Ranking: BERT and Beyond, ser. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers, 2021.
[33] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners,"
2019.
[34] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D.
Amodei, "Language models are few-shot learners," in Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., 2020.
[35] H. Touvron, T. Lavril, G.
Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample, "LLaMA: Open and efficient foundation language models," CoRR, vol. abs/2302.13971, 2023.
[36] J. Zhang, R. Xie, Y. Hou, W. X. Zhao, L. Lin, and J. Wen, "
Recommendation as instruction following: A large language model empowered recommendation approach," CoRR, vol. abs/2305.07001, 2023.
[37] Y. Hou, J. Zhang, Z. Lin, H. Lu, R. Xie, J. J. McAuley, and W. X. Zhao, "Large language models are zero-shot rankers for recommender systems," CoRR, vol. abs/2305.08845, 2023.
[38] Y. Xi, W. Liu, J. Lin, J. Zhu, B. Chen, R. Tang, W. Zhang, R. Zhang, and Y. Yu, "Towards open-world recommendation with knowledge augmentation from large language models," CoRR, vol. abs/2306.10933, 2023.
[39] W. Fan, Z. Zhao, J. Li, Y. Liu, X. Mei, Y. Wang, J. Tang, and Q. Li, "
Recommender systems in the era of large language models (LLMs)," CoRR, vol. abs/2307.02046, 2023.
[40] S. Wu, O. Irsoy, S. Lu, V. Dabravolski, M. Dredze, S. Gehrmann, P. Kambadur, D. S. Rosenberg, and G. Mann, "BloombergGPT: A large language model for finance,"
CoRR, vol. abs/2303.17564, 2023.
[41] J. Li, Y. Liu, W. Fan, X. Wei, H. Liu, J. Tang, and Q. Li, "Empowering molecule discovery for molecule-caption translation with large language models: A ChatGPT perspective," CoRR, vol. abs/2306.06615, 2023.
[42] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus, "
Emergent abilities of large language models," Trans. Mach. Learn. Res., vol. 2022, 2022.
[43] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. F. Christiano, J. Leike, and R. Lowe, "Training language models to follow instructions with human feedback,"
in NeurIPS, 2022.
[44] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, "Finetuned language models are zero-shot learners," in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[45] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, and D. Zhou, "Chain-of-thought prompting elicits reasoning in large language models," in NeurIPS, 2022.
[46] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G.
Neubig, "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing," ACM Comput. Surv., vol. 55, no. 9, pp. 195:1–195:35, 2023.
[47] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang, "Pre-trained models for natural language processing: A survey,"
CoRR, vol. abs/2003.08271, 2020.
[48] Y. Cao, S. Li, Y. Liu, Z. Yan, Y. Dai, P. S. Yu, and L. Sun, "A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT," CoRR, vol. abs/2303.04226, 2023.
[49] J. Li, T. Tang, W. X. Zhao, and J. Wen, "Pretrained language model for text generation: A survey," in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, Z. Zhou, Ed. ijcai.org, 2021, pp. 4492–4499.
[50] Q. Dong, L. Li, D. Dai, C. Zheng, Z. Wu, B. Chang, X. Sun, J. Xu, L. Li, and Z. Sui, "A survey for in-context learning,"
CoRR, vol. abs/2301.00234, 2023.
[51] J. Huang and K. C. Chang, "Towards reasoning in large language models: A survey," in Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 1049–1065.
[52] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, Y. Du, C. Yang, Y. Chen, Z. Chen, J. Jiang, R. Ren, Y. Li, X. Tang, Z. Liu, P. Liu, J. Nie, and J. Wen, "
A survey of large language models," CoRR, vol. abs/2303.18223, 2023.
[53] Q. Ai, T. Bai, Z. Cao, Y. Chang, J. Chen, Z. Chen, Z. Cheng, S. Dong, Z. Dou, F. Feng, S. Gao, J. Guo, X. He, Y. Lan, C. Li, Y. Liu, Z. Lyu, W. Ma, J. Ma, Z. Ren, P. Ren, Z. Wang, M. Wang, J. Wen, L. Wu, X. Xin, J. Xu, D. Yin, P. Zhang, F. Zhang, W. Zhang, M. Zhang, and X. Zhu, "
Information retrieval meets large language models: A strategic report from Chinese IR community," CoRR, vol. abs/2307.09751, 2023.
[54] X. Liu and W. B. Croft, "Statistical language modeling for information retrieval," Annu. Rev. Inf. Sci. Technol., vol. 39, no. 1, pp. 1–31, 2005.
[55] B. Mitra and N.
Craswell, "Neural models for information retrieval," CoRR, vol. abs/1705.01509, 2017.
[56] W. X. Zhao, J. Liu, R. Ren, and J. Wen, "Dense text retrieval based on pretrained language models: A survey," CoRR, vol. abs/2211.14876, 2022.
[57] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "
Exploring the limits of transfer learning with a unified text-to-text transformer," J. Mach. Learn. Res., vol. 21, pp. 140:1–140:67, 2020.
[58] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, "Deep contextualized word representations," in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana,
USA, June 1-6, 2018, Volume 1 (Long Papers), M. A. Walker, H. Ji, and A. Stent, Eds. Association for Computational Linguistics, 2018, pp. 2227–2237.
[59] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), J. Burstein, C. Doran, and T. Solorio, Eds. Association for Computational Linguistics, 2019, pp. 4171–4186.
[60] A.
Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 5998–6008.
[61] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "
BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, D. Jurafsky, J. Chai, N. Schluter, and J. R. Tetreault, Eds. Association for Computational Linguistics, 2020, pp. 7871–
7880.
[62] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei, "Scaling laws for neural language models," CoRR, vol. abs/2001.08361, 2020.
[63] A. Clark, D. de Las Casas, A. Guy, A. Mensch, M. Paganini, J. Hoffmann, B. Damoc, B. A. Hechtman, T. Cai, S. Borgeaud, G. van den Driessche, E. Rutherford, T. Hennigan, M. J. Johnson, A. Cassirer, C. Jones, E. Buchatskaya, D. Budden, L.
Sifre, S. Osindero, O. Vinyals, M. Ranzato, J. W. Rae, E. Elsen, K. Kavukcuoglu, and K. Simonyan, "Unified scaling laws for routed language models," in International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 2022, pp. 4057–4086.
[64] L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y. Wang, J. Gao, M. Zhou, and H. Hon, "Unified language model pre-training for natural language understanding and generation," in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, H.
M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. B. Fox, and R. Garnett, Eds., 2019, pp. 13042–13054.
[65] L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, and C. Raffel, "
mT5: A massively multilingual pre-trained text-to-text transformer," in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tür, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, and Y. Zhou, Eds. Association for Computational Linguistics, 2021, pp. 483–498. [66] V. Sanh, A. Webson, C. Raffel, S. H. Bach, L. Sutawika, Z. Alyafeai, A. Chaffin, A. Stiegler, A. Raja, M. Dey, M. S. Bari, C. Xu, U. Thakker, S. S. Sharma, E. Szczechla, T. Kim, G. Chhablani, N. V. Nayak, D. Datta, J. Chang, M. T. Jiang, H. Wang, M. Manica, S. Shen, Z. X. Yong, H. Pandey, R. Bawden, T. Wang, T. Neeraj, J. Rozen, A. Sharma, A. Santilli, T. Févry, J. A. Fries, R. Teehan, T. L. Scao, S. Biderman, L. Gao, T. Wolf, and A. M. Rush, "
Multitask prompted training enables zero-shot task generalization," in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [67] H. Bao, L. Dong, F. Wei, W. Wang, N. Yang, X. Liu, Y. Wang, J. Gao, S. Piao, M. Zhou, and H. Hon, "
UniLMv2: Pseudo-masked language models for unified language model pre-training," in Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, ser. Proceedings of Machine Learning Research, vol. 119. PMLR, 2020, pp. 642–652. [68] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia, W. L. Tam, Z. Ma, Y. Xue, J. Zhai, W. Chen, Z. Liu, P. Zhang, Y. Dong, and J. Tang, "
GLM-130B: an open bilingual pre-trained model," in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [69] W. Fedus, B. Zoph, and N. Shazeer, "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity," J. Mach. Learn.
Res., vol. 23, pp. 120:1–120:39, 2022. [70] Z. Yang, Z. Dai, Y. Yang, J. G. Carbonell, R. Salakhutdinov, and Q. V. Le, "XLNet: Generalized autoregressive pretraining for language understanding," in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, H.
M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. B. Fox, and R. Garnett, Eds., 2019, pp. 5754–5764. [71] S. Black, S. Biderman, E. Hallahan, Q. Anthony, L. Gao, L. Golding, H. He, C. Leahy, K. McDonell, J. Phang, M. Pieler, U. S. Prashanth, S. Purohit, L. Reynolds, J. Tow, B. Wang, and S. Weinbach, "
GPT-NeoX-20B: An open-source autoregressive language model," CoRR, vol. abs/2204.06745, 2022. [72] J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, H. F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, E. Rutherford, T. Hennigan, J. Menick, A. Cassirer, R. Powell, G. van den Driessche, L. A. Hendricks, M. Rauh, P. Huang, A. Glaese, J. Welbl, S. Dathathri, S. Huang, J. Uesato, J. Mellor, I. Higgins, A. Creswell, N. McAleese, A. Wu, E. Elsen, S. M.
Jayakumar, E. Buchatskaya, D. Budden, E. Sutherland, K. Simonyan, M. Paganini, L. Sifre, L. Martens, X. L. Li, A. Kuncoro, A. Nematzadeh, E. Gribovskaya, D. Donato, A. Lazaridou, A. Mensch, J. Lespiau, M. Tsimpoukelli, N. Grigorev, D. Fritz, T. Sottiaux, M. Pajarskas, T. Pohlen, Z. Gong, D. Toyama, C. de Masson d'Autume, Y. Li, T. Terzi, V. Mikulik, I. Babuschkin, A.
Clark, D. de Las Casas, A. Guy, C. Jones, J. Bradbury, M. J. Johnson, B. A. Hechtman, L. Weidinger, I. Gabriel, W. Isaac, E. Lockhart, S. Osindero, L. Rimell, C. Dyer, O. Vinyals, K. Ayoub, J. Stanway, L. Bennett, D. Hassabis, K. Kavukcuoglu, and G.
Irving, "Scaling language models: Methods, analysis & insights from training Gopher," CoRR, vol. abs/2112.11446, 2021. [73] N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, Y. Zhou, A. W. Yu, O. Firat, B. Zoph, L. Fedus, M. P. Bosma, Z. Zhou, T. Wang, Y. E. Wang, K. Webster, M. Pellat, K. Robinson, K. S. Meier-Hellstern, T. Duke, L. Dixon, K. Zhang, Q. V. Le, Y. Wu, Z. Chen, and C. Cui, "
GLaM: Efficient scaling of language models with mixture-of-experts," in International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 2022, pp. 5547–5569. [74] Y. Sun, S. Wang, S. Feng, S. Ding, C. Pang, J. Shang, J. Liu, X. Chen, Y. Zhao, Y. Lu, W. Liu, Z. Wu, W. Gong, J. Liang, Z. Shang, P. Sun, W. Liu, X. Ouyang, D. Yu, H. Tian, H. Wu, and H. Wang, "
ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation," CoRR, vol. abs/2107.02137, 2021. [75] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. T. Diab, X. Li, X. V. Lin, T. Mihaylov, M. Ott, S. Shleifer, K. Shuster, D. Simig, P. S. Koura, A. Sridhar, T. Wang, and L.
Zettlemoyer, "OPT: open pre-trained transformer language models," CoRR, vol. abs/2205.01068, 2022. [76] J. Hall, N. Shazeer, A. Kulshreshtha, H. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, Y. Li, H. Lee, H. S. Zheng, A. Ghafouri, M. Menegali, Y. Huang, M. Krikun, D. Lepikhin, J. Qin, D. Chen, Y. Xu, Z. Chen, A. Roberts, M. Bosma, Y. Zhou, C. Chang, I. Krivokon, W. Rusch, M. Pickett, K. S. Meier-Hellstern, M. R. Morris, T. Doshi, R. D. Santos, T. Duke, J. Soraker, B. Zevenbergen, V. Prabhakaran, M. Diaz, B. Hutchinson, K. Olson, A. Molina, E. Hoffman-John, J. Lee, L. Aroyo, R. Rajakumar, A. Butryna, M. Lamm, V. Kuzmina, J. Fenton, A. Cohen, R. Bernstein, R. Kurzweil, B. A. y Arcas, C. Cui, M. Croak, E. H. Chi, and Q. Le, "LaMDA: Language models for dialog applications," CoRR, vol. abs/2201.08239, 2022. [77] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R.
Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel, "
PaLM: Scaling language modeling with pathways," CoRR, vol. abs/2204.02311, 2022. [78] T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilic, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé, J. Tow, A. M. Rush, S. Biderman, A. Webson, P. S. Ammanamanchi, T. Wang, B. Sagot, N. Muennighoff, A. V. del Moral, O. Ruwase, R. Bawden, S. Bekman, A. McMillan-Major, I. Beltagy, H. Nguyen, L. Saulnier, S. Tan, P. O. Suarez, V. Sanh, H. Laurençon, Y. Jernite, J. Launay, M. Mitchell, C. Raffel, A. Gokaslan, A. Simhi, A. Soroa, A. F. Aji, A. Alfassy, A. Rogers, A. K. Nitzav, C. Xu, C. Mou, C. Emezue, C. Klamm, C. Leong, D. van Strien, D. I. Adelani, and et al., "BLOOM: A 176b-parameter open-access multilingual language model," CoRR, vol. abs/2211.05100, 2022. [79] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, Y. Wu, B. Neyshabur, G. Gur-Ari, and V. Misra, "
Solving quantitative reasoning problems with language models," in NeurIPS, 2022. [80] OpenAI, "GPT-4 technical report," CoRR, vol. abs/2303.08774, 2023. [81] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, and L.
Sifre, "Training compute-optimal large language models," CoRR, vol. abs/2203.15556, 2022. [82] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, "LoRA: Low-rank adaptation of large language models," in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [83] X. L. Li and P. Liang, "
Prefix-tuning: Optimizing continuous prompts for generation," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, C. Zong, F. Xia, W. Li, and R. Navigli, Eds. Association for Computational Linguistics, 2021, pp. 4582–4597. [84] B. Lester, R. Al-Rfou, and N. Constant, "
The power of scale for parameter-efficient prompt tuning," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, M. Moens, X. Huang, L. Specia, and S. W. Yih, Eds. Association for Computational Linguistics, 2021, pp. 3045–3059. [85] T. Dettmers, A. Pagnoni, A. Holtzman, and L. Zettlemoyer, "QLoRA: Efficient finetuning of quantized LLMs," CoRR, vol. abs/2305.14314, 2023. [86] L. Wang, N. Yang, and F. Wei, "
Query2doc: Query expansion with large language models," pp. 9414–9423, 2023. [87] N. A. Jaleel, J. Allan, W. B. Croft, F. Diaz, L. S. Larkey, X. Li, M. D. Smucker, and C. Wade, "UMass at TREC 2004: Novelty and HARD," in Proceedings of the Thirteenth Text REtrieval Conference, TREC 2004, Gaithersburg, Maryland, USA, November 16-19, 2004, ser. NIST Special Publication, E. M. Voorhees and L. P. Buckland, Eds., vol. 500-261.
National Institute of Standards and Technology (NIST), 2004. [88] D. Metzler and W. B. Croft, "Latent concept expansion using Markov random fields," in SIGIR 2007: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands, July 23-27, 2007, W. Kraaij, A. P. de Vries, C. L. A. Clarke, N. Fuhr, and N. Kando, Eds. ACM, 2007, pp. 311–318. [89] C. Zhai and J. D. Lafferty, "
Model-based feedback in the language modeling approach to information retrieval," in Proceedings of the 2001 ACM CIKM International Conference on Information and Knowledge Management, Atlanta, Georgia, USA, November 5-10, 2001. ACM, 2001, pp. 403–410. [90] D. Metzler and W. B. Croft, "A Markov random field model for term dependencies," in SIGIR 2005: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Salvador, Brazil, August 15-19, 2005, R.
A. Baeza-Yates, N. Ziviani, G. Marchionini, A. Moffat, and J. Tait, Eds. ACM, 2005, pp. 472–479. [91] X. Wang, C. Macdonald, N. Tonellotto, and I. Ounis, "Pseudo-relevance feedback for multiple representation dense retrieval," in ICTIR '21: The 2021 ACM SIGIR International Conference on the Theory of Information Retrieval, Virtual Event, Canada, July 11, 2021, F. Hasibi, Y.
Fang, and A. Aizawa, Eds. ACM, 2021, pp. 297–306. [92] Z. Zheng, K. Hui, B. He, X. Han, L. Sun, and A. Yates, "BERT-QE: contextualized query expansion for document re-ranking," in Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, ser. Findings of ACL, T. Cohn, Y. He, and Y. Liu, Eds., vol. EMNLP 2020. Association for Computational Linguistics, 2020, pp. 4718–
4728. [93] F. Diaz, B. Mitra, and N. Craswell, "Query expansion with locally-trained word embeddings," in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics, 2016. [94] S. Kuzi, A. Shtok, and O. Kurland, "
Query expansion using word embeddings," in Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016, S. Mukhopadhyay, C. Zhai, E. Bertino, F. Crestani, J. Mostafa, J. Tang, L. Si, X. Zhou, Y. Chang, Y. Li, and P. Sondhi, Eds.
ACM, 2016, pp. 1929–1932. [95] K. Mao, Z. Dou, F. Mo, J. Hou, H. Chen, and H. Qian, "Large language models know your contextual search intent: A prompting framework for conversational search," pp. 1211–1225, 2023. [96] I. Mackie, I. Sekulic, S. Chatterjee, J. Dalton, and F.
Crestani, "GRM: generative relevance modeling using relevance-aware sample estimation for document retrieval," CoRR, vol. abs/2306.09938, 2023. [97] K. Srinivasan, K. Raman, A. Samanta, L. Liao, L. Bertelli, and M. Bendersky, "QUILL: query intent with large language models using retrieval augmentation and multi-stage distillation," in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: EMNLP 2022 - Industry Track, Abu Dhabi, UAE, December 7 - 11, 2022, Y.
Li and A. Lazaridou, Eds. Association for Computational Linguistics, 2022, pp. 492–501. [98] J. Feng, C. Tao, X. Geng, T. Shen, C. Xu, G. Long, D. Zhao, and D. Jiang, "Knowledge refinement via interaction between search engines and large language models," CoRR, vol. abs/2305.07402, 2023.
[99] I. Mackie, S. Chatterjee, and J. Dalton, "Generative and pseudo-relevant feedback for sparse, dense and learned sparse retrieval," CoRR, vol. abs/2305.07477, 2023. [100] X. Ma, Y. Gong, P. He, H. Zhao, and N. Duan, "Query rewriting for retrieval-augmented large language models," CoRR, vol. abs/2305.14283, 2023. [101] L. Gao, X. Ma, J. Lin, and J. Callan, "
Precise zero-shot dense retrieval without relevance labels," CoRR, vol. abs/2212.10496, 2022. [102] R. Jagerman, H. Zhuang, Z. Qin, X. Wang, and M. Bendersky, "Query expansion by prompting large language models," CoRR, vol. abs/2305.03653, 2023. [103] Y. Tang, R. Qiu, and X. Li, "Prompt-based effective input reformulation for legal case retrieval,"
in Databases Theory and Applications - 34th Australasian Database Conference, ADC 2023, Melbourne, VIC, Australia, November 1-3, 2023, Proceedings, ser. Lecture Notes in Computer Science, Z. Bao, R. Borovica-Gajic, R. Qiu, F. M. Choudhury, and Z. Yang, Eds., vol. 14386. Springer, 2023, pp. 87–100. [104] F. Ye, M. Fang, S. Li, and E.
Yilmaz, "Enhancing conversational search: Large language model-aided informative query rewriting," CoRR, vol. abs/2310.09716, 2023. [105] C. Huang, C. Hsu, T. Hsu, C. Li, and Y. Chen, "CONVERSER: few-shot conversational dense retrieval with synthetic data generation," in Proceedings of the 24th Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL 2023, Prague, Czechia, September 11 - 15, 2023, D.
Schlangen, S. Stoyanchev, S. Joty, O. Dusek, C. Kennington, and M. Alikhani, Eds. Association for Computational Linguistics, 2023, pp. 381–387. [106] M. Li, H. Zhuang, K. Hui, Z. Qin, J. Lin, R. Jagerman, X. Wang, and M. Bendersky, "
Generate, filter, and fuse: Query expansion via multi-step keyword generation for zero-shot neural rankers," CoRR, vol. abs/2311.09175, 2023. [107] A. Anand, V. V, V. Setty, and A. Anand, "Context aware query rewriting for text rankers using LLM," CoRR, vol. abs/2308.16753, 2023. [108] T. Shen, G. Long, X. Geng, C. Tao, T. Zhou, and D. Jiang, "Large language models are strong zero-shot retriever,"
CoRR, vol. abs/2304.14233, 2023. [109] M. Alaofi, L. Gallagher, M. Sanderson, F. Scholer, and P. Thomas, "Can generative LLMs create query variants for test collections? An exploratory study," in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, H. Chen, W. E. Duh, H. Huang, M. P. Kato, J. Mothe, and B. Poblete, Eds.
ACM, 2023, pp. 1869–1873. [110] W. Yu, D. Iter, S. Wang, Y. Xu, M. Ju, S. Sanyal, C. Zhu, M. Zeng, and M. Jiang, "Generate rather than retrieve: Large language models are strong context generators," in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023.
OpenReview.net, 2023. [111] T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng, "MS MARCO: A human generated machine reading comprehension dataset," in CoCo@NIPS, ser. CEUR Workshop Proceedings, vol. 1773. CEUR-WS.org, 2016. [112] T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. P. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, K. Toutanova, L. Jones, M. Kelcey, M. Chang, A. M. Dai, J. Uszkoreit, Q. Le, and S.
Petrov, "Natural questions: a benchmark for question answering research," Trans. Assoc. Comput. Linguistics, vol. 7, pp. 452–466, 2019. [113] W. Peng, G. Li, Y. Jiang, Z. Wang, D. Ou, X. Zeng, D. Xu, T. Xu, and E. Chen, "Large language model based long-tail query rewriting in Taobao search,"
CoRR, vol. abs/2311.03758, 2023. [114] Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, and J. Tang, "GLM: general language model pretraining with autoregressive blank infilling," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, S.
Muresan, P. Nakov, and A. Villavicencio, Eds. Association for Computational Linguistics, 2022, pp. 320–335. [115] A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin, C. Lv, D. Pan, D. Wang, D. Yan, F. Yang, F. Deng, F. Wang, F. Liu, G. Ai, G. Dong, H. Zhao, H. Xu, H. Sun, H. Zhang, H. Liu, J. Ji, J. Xie, J. Dai, K. Fang, L. Su, L. Song, L. Liu, L. Ru, L. Ma, M. Wang, M. Liu, M. Lin, N. Nie, P. Guo, R. Sun, T. Zhang, T. Li, T. Li, W. Cheng, W. Chen, X. Zeng, X. Wang, X. Chen, X. Men, X. Yu, X. Pan, Y. Shen, Y. Wang, Y. Li, Y. Jiang, Y. Gao, Y. Zhang, Z. Zhou, and Z. Wu, "Baichuan 2:
Open large-scale language models," CoRR, vol. abs/2309.10305, 2023. [116] J. Bai, S. Bai, Y. Chu, Z. Cui, K. Dang, X. Deng, Y. Fan, W. Ge, Y. Han, F. Huang, B. Hui, L. Ji, M. Li, J. Lin, R. Lin, D. Liu, G. Liu, C. Lu, K. Lu, J. Ma, R. Men, X. Ren, X. Ren, C. Tan, S. Tan, J. Tu, P. Wang, S. Wang, W. Wang, S. Wu, B. Xu, J. Xu, A. Yang, H. Yang, J. Yang, S. Yang, Y. Yao, B. Yu, H. Yuan, Z. Yuan, J. Zhang, X. Zhang, Y. Zhang, Z. Zhang, C. Zhou, J. Zhou, X. Zhou, and T. Zhu, "Qwen technical report,"
CoRR, vol. abs/2309.16609, 2023. [117] D. Alexander, W. Kusa, and A. P. de Vries, "ORCAS-I: queries annotated with intent using weak supervision," in SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11 - 15, 2022, E. Amigó, P. Castells, J. Gonzalo, B. Carterette, J. S. Culpepper, and G.
Kazai, Eds. ACM, 2022, pp. 3057–3066. [118] K. D. Dhole, R. Chandradevan, and E. Agichtein, "An interactive query generation assistant using LLM-based prompt modification and user feedback," CoRR, vol. abs/2311.11226, 2023. [119] O. Weller, K. Lo, D. Wadden, D. J. Lawrie, B. V. Durme, A. Cohan, and L. Soldaini, "
When do generative query and document expansions fail? A comprehensive study across methods, retrievers, and datasets," CoRR, vol. abs/2309.08541, 2023. [120] L. H. Bonifacio, H. Abonizio, M. Fadaee, and R. F. Nogueira, "InPars: Data augmentation for information retrieval using large language models," CoRR, vol. abs/2202.05144, 2022.
[121] G. Ma, X. Wu, P. Wang, Z. Lin, and S. Hu, "Pre-training with large language model-based document expansion for dense passage retrieval," CoRR, vol. abs/2308.08285, 2023. [122] V. Jeronymo, L. H. Bonifacio, H. Abonizio, M. Fadaee, R. de Alencar Lotufo, J. Zavrel, and R. F. Nogueira, "