doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.06921 | 66 | [21] Mariam Mahdaoui, Said Nouh, My Seddiq ELKASMI Alaoui, and Mounir Sadiq. 2022. Comparative study between automatic hint generation approaches in Intelligent Programming Tutors. Procedia Computer Science 198 (2022), 391–396. https://doi.org/10.1016/j.procs.2021.12.259 [22] Jessica McBroom, Irena Koprinska, and Kalina Yacef. 2022. A Survey of Automated Programming Hint Generation: The HINTS Framework. Comput. Surveys 54, 8 (Nov. 2022), 1–27. https://doi.org/10.1145/3469885
[23] Nhan Nguyen and Sarah Nadi. 2022. An empirical evaluation of GitHub Copilot's code suggestions. In Proceedings of the 19th International Conference on Mining Software Repositories. ACM, Pittsburgh, Pennsylvania, 1–5. https://doi.org/10.1145/3524842.3528470 | 2308.06921#66 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Computing educators face significant challenges in providing timely support
to students, especially in large class settings. Large language models (LLMs)
have emerged recently and show great promise for providing on-demand help at a
large scale, but there are concerns that students may over-rely on the outputs
produced by these models. In this paper, we introduce CodeHelp, a novel
LLM-powered tool designed with guardrails to provide on-demand assistance to
programming students without directly revealing solutions. We detail the design
of the tool, which incorporates a number of useful features for instructors,
and elaborate on the pipeline of prompting strategies we use to ensure
generated outputs are suitable for students. To evaluate CodeHelp, we deployed
it in a first-year computer and data science course with 52 students and
collected student interactions over a 12-week period. We examine students'
usage patterns and perceptions of the tool, and we report reflections from the
course instructor and a series of recommendations for classroom use. Our
findings suggest that CodeHelp is well-received by students who especially
value its availability and help with resolving errors, and that for instructors
it is easy to deploy and complements, rather than replaces, the support that
they provide to students. | http://arxiv.org/pdf/2308.06921 | Mark Liffiton, Brad Sheese, Jaromir Savelka, Paul Denny | cs.CY | null | null | cs.CY | 20230814 | 20230814 | [
{
"id": "2304.03938"
},
{
"id": "2107.03374"
},
{
"id": "2303.08033"
},
{
"id": "2301.12867"
},
{
"id": "2306.05715"
},
{
"id": "2304.11938"
},
{
"id": "2307.16364"
},
{
"id": "2304.02491"
},
{
"id": "2306.02608"
},
{
"id": "2302.03287"
},
{
"id": "2201.11903"
},
{
"id": "2302.06871"
},
{
"id": "2207.10397"
}
] |
2308.07107 | 66 | Paradigm | Type | Method
--- | --- | ---
Supervised Rerankers | Enc-only | [140]
Supervised Rerankers | Enc-dec | [13], [141], [142], [143]
Supervised Rerankers | Dec-only | [131], [144], [145]
Unsupervised Rerankers | Pointwise | [146], [147], [148], [149], [150], [151]
Unsupervised Rerankers | Listwise | [155], [156]
Unsupervised Rerankers | Pairwise | [152], [153], [154]
Data Augmentation | - | [157], [158], [159], [160], [161], [162]
with lower inference time demands), the potential mismatch between LLM-generated texts and real user queries could impact retrieval effectiveness. Furthermore, as LLMs usually lack domain-specific knowledge, they need to be fine-tuned on task-specific datasets before applying them to downstream tasks. Therefore, developing efficient strategies to fine-tune these LLMs with numerous parameters emerges as a key concern.
# 5 RERANKER | 2308.07107#66 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 66 |
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. Codexglue: A machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664, 2021.
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568, 2023.
Aman Madaan, Alexander Shypula, Uri Alon, Milad Hashemi, Parthasarathy Ranganathan, Yiming Yang, Graham Neubig, and Amir Yazdanbakhsh. Learning performance-improving code edits. arXiv preprint arXiv:2302.07867, 2023a. | 2308.07124#66 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.06921 | 67 | [24] Chinedu Wilfred Okonkwo and Abejide Ade-Ibijola. 2021. Python-Bot: A Chatbot for Teaching Python Programming. Engineering Letters 29 (02 2021), 25–34. [25] Chinedu Wilfred Okonkwo and Abejide Ade-Ibijola. 2022. Revision-Bot: A Chatbot for Studying Past Questions in Introductory Programming. IAENG International Journal of Computer Science 49, 3 (2022).
[26] Zachary A. Pardos and Shreya Bhandari. 2023. Learning gain differences between | 2308.06921#67 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes |
2308.07107 | 67 | # 5 RERANKER
Reranker, as the second-pass document filter in IR, aims to rerank a document list retrieved by the retriever (e.g., BM25) based on the query-document relevance. Based on the usage of LLMs, the existing LLM-based reranking methods can be divided into three paradigms: utilizing LLMs as supervised rerankers, utilizing LLMs as unsupervised rerankers, and utilizing LLMs for training data augmentation. These paradigms are summarized in Table 5 and will be elaborated upon in the following sections. Recall that we will use the term document to refer to the text retrieved in general IR scenarios, including instances such as passages (e.g., passages in the MS MARCO passage ranking dataset [111]).
# 5.1 Utilizing LLMs as Supervised Rerankers
Supervised fine-tuning is an important step in applying pre-trained LLMs to a reranking task. Due to the lack of awareness of ranking during pre-training, LLMs cannot appropriately measure the query-document relevance and fully understand the reranking tasks. By fine-tuning LLMs on task-specific ranking datasets, such as the MS MARCO passage ranking dataset [111], which includes signals of
| 2308.07107#67 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 67 | Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023b.
Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. MetaICL: Learning to learn in context. Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2022. URL https://arxiv.org/abs/2110.15943.
Martin Monperrus, Matias Martinez, He Ye, Fernanda Madeiral, Thomas Durieux, and Zhongxing Yu. Megadiff: A dataset of 600k java source code changes categorized by diff size. arXiv preprint arXiv:2108.04631, 2021.
Niklas Muennighoff. Sgpt: Gpt sentence embeddings for semantic search. arXiv preprint arXiv:2202.08904, 2022. | 2308.07124#67 | OctoPack: Instruction Tuning Code Large Language Models |
2308.06921 | 68 | [26] Zachary A. Pardos and Shreya Bhandari. 2023. Learning gain differences between
ChatGPT and human tutor generated algebra hints. arXiv:2302.06871 [cs.CY] [27] James Prather, Paul Denny, Juho Leinonen, Brett A Becker, Ibrahim Albluwi, Michael E Caspersen, Michelle Craig, Hieke Keuning, Natalie Kiesler, Tobias Kohn, et al. 2023. Transformed by Transformers: Navigating the AI Coding Revolution for Computing Education: An ITiCSE Working Group Conducted by Humans. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 2. 561–562. [28] James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos. 2023. "It's Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers. arXiv:2304.02491 [cs.HC]
[29] Margot Rutgers. 2021. Duckbot: A chatbot to assist students in programming tutorials. Master's thesis. University of Twente. | 2308.06921#68 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes |
2308.07107 | 68 |
both relevance and irrelevance, LLMs can adjust their parameters to yield better performance in the reranking tasks. Based on the backbone model structure, we can categorize existing supervised rerankers as: (1) encoder-only, (2) encoder-decoder, and (3) decoder-only.
# 5.1.1 Encoder-only
The encoder-based rerankers represent a significant turning point in applying LLMs to document ranking tasks. They demonstrate how some pre-trained language models (e.g., BERT [59]) can be fine-tuned to deliver highly accurate relevance predictions. A representative approach is monoBERT [140], which transforms a query-document pair into a sequence "[CLS] query [SEP] document [SEP]" as the model input and calculates the relevance score by feeding the "[CLS]" representation into a linear layer. The reranking model is optimized based on the cross-entropy loss.
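As a rough illustration of this design, the sketch below scores a query-document pair with a cross-encoder. It is a minimal sketch assuming the Hugging Face transformers library; "bert-base-uncased" is a placeholder for a checkpoint actually fine-tuned on MS MARCO-style relevance labels.

```python
# Sketch of a monoBERT-style cross-encoder relevance scorer.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # labels: irrelevant (0) / relevant (1)
)
model.eval()

def relevance_score(query: str, document: str) -> float:
    # Builds the "[CLS] query [SEP] document [SEP]" input; the classification
    # head on top of the [CLS] representation yields the relevance logits.
    inputs = tokenizer(query, document, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)[0, 1].item()  # P(relevant)

docs = ["BM25 is a term-based ranking function.", "Cats sleep most of the day."]
ranked = sorted(docs, key=lambda d: relevance_score("what is BM25", d), reverse=True)
print(ranked)
```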
# 5.1.2 Encoder-Decoder | 2308.07107#68 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 68 | Niklas Muennighoff. Sgpt: Gpt sentence embeddings for semantic search. arXiv preprint arXiv:2202.08904, 2022.
Niklas Muennighoff, Nouamane Tazi, Loïc Magne, and Nils Reimers. Mteb: Massive text embedding benchmark. arXiv preprint arXiv:2210.07316, 2022a. doi: 10.48550/ARXIV.2210.07316. URL https://arxiv.org/abs/2210.07316.
Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786, 2022b.
Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language models. arXiv preprint arXiv:2305.16264, 2023. | 2308.07124#68 | OctoPack: Instruction Tuning Code Large Language Models |
2308.06921 | 69 | [29] Margot Rutgers. 2021. Duckbot: A chatbot to assist students in programming tutorials. Master's thesis. University of Twente.
[30] Sami Sarsa, Paul Denny, Arto Hellas, and Juho Leinonen. 2022. Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models. In Proceedings of the 2022 ACM Conference on International Computing Education Research V.1. ACM, Lugano and Virtual Event Switzerland, 27–43. https://doi.org/10.1145/3501385.3543957 [31] Jaromir Savelka, Arav Agarwal, Marshall An, Chris Bogart, and Majd Sakr. 2023. Thrilled by Your Progress! Large Language Models (GPT-4) No Longer Struggle to Pass Assessments in Higher Education Programming Course. In Proceedings of the 2023 ACM Conference on International Computing Education Research V.1. ACM. [32] Jaromir Savelka, Arav Agarwal, Christopher Bogart, and Majd Sakr. 2023. Large Language Models (GPT) Struggle to Answer Multiple-Choice Questions about Code. arXiv:2303.08033 [cs.CL] | 2308.06921#69 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes |
2308.07107 | 69 | In this field, existing studies mainly formulate document ranking as a generation task and optimize an encoder-decoder-based reranking model [13, 141-143]. Specifically, given the query and the document, reranking models are usually fine-tuned to generate a single token, such as "true" or "false". During inference, the query-document relevance score is determined based on the logit of the generated token. For example, a T5 model can be fine-tuned to generate classification tokens for relevant or irrelevant query-document pairs [13]. At inference time, a softmax function is applied to the logits of the "true" and "false" tokens, and the relevance score is calculated as the probability of the "true" token. The following method [141] involves a multi-view learning approach based on the T5 model. This approach simultaneously considers two tasks: generating classification tokens for a given query-document pair and generating the corresponding query conditioned on the provided document. DuoT5 [142] considers a triple (q, d_i, d_j) as the input of the T5 model and is fine-tuned to generate | 2308.07107#69 | Large Language Models for Information Retrieval: A Survey |
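To make the "true"/"false" scoring concrete, here is a minimal sketch assuming the Hugging Face transformers library; "t5-small" is a placeholder for a checkpoint actually fine-tuned for relevance generation, and the "Query: ... Document: ... Relevant:" template follows the common monoT5 convention.

```python
# Sketch of monoT5-style scoring: the relevance score is the softmax
# probability of the "true" token against the "false" token at the first
# decoding step.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

# First sub-token ids of "true" and "false" (an approximation for a sketch).
TRUE_ID = tokenizer.encode("true", add_special_tokens=False)[0]
FALSE_ID = tokenizer.encode("false", add_special_tokens=False)[0]

def monot5_score(query: str, document: str) -> float:
    prompt = f"Query: {query} Document: {document} Relevant:"
    enc = tokenizer(prompt, return_tensors="pt", truncation=True)
    # The decoder starts from the decoder-start token; we read the logits of
    # the first generation step only.
    decoder_input = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=decoder_input).logits[0, 0]
    # Softmax restricted to the "true"/"false" token logits.
    pair = torch.stack([logits[TRUE_ID], logits[FALSE_ID]]).softmax(dim=-1)
    return pair[0].item()  # probability of "true"

print(monot5_score("what is a reranker", "A reranker reorders retrieved documents."))
```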
2308.07124 | 69 | Ansong Ni, Srini Iyer, Dragomir Radev, Veselin Stoyanov, Wen-tau Yih, Sida Wang, and Xi Victoria Lin. Lever: Learning to verify language-to-code generation with execution. In International Conference on Machine Learning, pp. 26106–26128. PMLR, 2023.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474, 2022.
Erik Nijkamp, Hiroaki Hayashi, Caiming Xiong, Silvio Savarese, and Yingbo Zhou. Codegen2: Lessons for training llms on programming and natural languages. arXiv preprint arXiv:2305.02309, 2023. | 2308.07124#69 | OctoPack: Instruction Tuning Code Large Language Models |
2308.06921 | 70 | [33] Haoye Tian, Weiqi Lu, Tsz On Li, Xunzhu Tang, Shing-Chi Cheung, Jacques Klein, and Tegawendé F. Bissyandé. 2023. Is ChatGPT the Ultimate Programming Assistant – How far is it? arXiv:2304.11938 [cs.SE] [34] James Walden, Nicholas Caporusso, and Ludiana Atnafu. 2022. A Chatbot for Teaching Secure Programming. In Proceedings of the EDSIG Conference ISSN, Vol. 2473. 4901. [35] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv:2201.11903 [cs.CL] | 2308.06921#70 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes |
2308.07107 | 70 | ument. DuoT5 [142] considers a triple (q, d_i, d_j) as the input of the T5 model and is fine-tuned to generate the token "true" if document d_i is more relevant to query q than document d_j, and "false" otherwise. During inference, for each document d_i, it enumerates all other documents d_j and uses global aggregation functions to generate the relevance score s_i for document d_i (e.g., s_i = sum_{j != i} p_{i,j}, where p_{i,j} represents the probability of generating "true" when taking (q, d_i, d_j) as the model input). | 2308.07107#70 | Large Language Models for Information Retrieval: A Survey |
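The aggregation step can be illustrated in plain Python; the pairwise probability matrix below is made-up illustration data standing in for DuoT5 outputs, not real model output.

```python
# Sketch of DuoT5-style score aggregation: p[i][j] approximates
# P("true" | (q, d_i, d_j)), the chance that d_i beats d_j for query q.
# Aggregate a per-document score s_i = sum over j != i of p[i][j].
p = [
    [0.0, 0.9, 0.8],  # d_0 vs d_1, d_0 vs d_2
    [0.1, 0.0, 0.4],
    [0.2, 0.6, 0.0],
]

scores = [sum(row[j] for j in range(len(row)) if j != i) for i, row in enumerate(p)]
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
print(scores)   # [1.7, 0.5, 0.8]
print(ranking)  # [0, 2, 1] -> d_0 is ranked first
```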
2308.07124 | 70 | Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021. URL https://openreview.net/forum?id=iedYJm92o0a.
OpenAI. Gpt-4 technical report, 2023.
Gabriel Orlanski, Kefan Xiao, Xavier Garcia, Jeffrey Hui, Joshua Howland, Jonathan Malmaud, Jacob Austin, Rishah Singh, and Michele Catasta. Measuring the impact of programming language distribution. arXiv preprint arXiv:2302.01973, 2023.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. In Conference on Neural Information Processing Systems (NeurIPS), 2022. URL https://arxiv.org/abs/2203.02155. | 2308.07124#70 | OctoPack: Instruction Tuning Code Large Language Models |
2308.06921 | 71 | [36] Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac, Julia Haas, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2022. Taxonomy of Risks Posed by Language Models. In 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT '22). Association for Computing Machinery, New York, NY, USA, 214–229. https://doi.org/10.1145/3531146.3533088 [37] Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. 2023. Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity. arXiv:2301.12867 [cs.CL] | 2308.06921#71 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes |
2308.07107 | 71 | Although these generative loss-based methods outperform several strong ranking baselines, they are not optimal for reranking tasks. This stems from two primary reasons. First, it is commonly expected that a reranking model will yield a numerical relevance score for each query-document pair rather than text tokens. Second, compared to generation losses, it is more reasonable to optimize the reranking model using ranking losses (e.g., RankNet [163]). Recently, RankT5 [143] has directly calculated the relevance score for a query-document pair and optimized the ranking performance with "pairwise" or "listwise" ranking losses. An avenue for potential performance enhancement lies in the substitution of the base-sized T5 model with its larger-scale counterpart.
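A minimal sketch of such a pairwise objective, assuming PyTorch; the score values are illustrative, standing in for the outputs of a RankT5-like scorer.

```python
# Sketch of a RankNet-style pairwise loss, the kind of ranking objective a
# scorer can be optimized with instead of a generation loss.
import torch
import torch.nn.functional as F

def ranknet_loss(s_pos: torch.Tensor, s_neg: torch.Tensor) -> torch.Tensor:
    # P(d_pos ranked above d_neg) = sigmoid(s_pos - s_neg); the loss is binary
    # cross-entropy against the target "d_pos should beat d_neg" (label 1).
    return F.binary_cross_entropy_with_logits(s_pos - s_neg, torch.ones_like(s_pos))

# Scores for a more-relevant and a less-relevant document under the same query.
s_pos = torch.tensor([2.5, 0.3], requires_grad=True)
s_neg = torch.tensor([1.0, 0.9], requires_grad=True)
loss = ranknet_loss(s_pos, s_neg)
loss.backward()  # gradients push s_pos scores up and s_neg scores down
print(loss.item())
```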
# 5.1.3 Decoder-only | 2308.07107#71 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
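As a companion to the RankT5 discussion above, here is a minimal PyTorch sketch of a listwise softmax cross-entropy ranking loss. The function name, tensor shapes, and toy inputs are illustrative assumptions, not RankT5's actual implementation.

```python
import torch
import torch.nn.functional as F

def listwise_softmax_ce(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Listwise ranking loss for one query's candidate list.

    scores: [num_docs] relevance scores emitted by the reranker.
    labels: [num_docs] relevance judgments (assumes at least one positive).
    The scores are treated as logits over the list, pushing probability
    mass toward the relevant documents.
    """
    log_probs = F.log_softmax(scores, dim=-1)       # normalize over the candidate list
    target = labels.float() / labels.float().sum()  # spread target mass over relevant docs
    return -(target * log_probs).sum()

# Toy usage: the third candidate is the relevant one.
scores = torch.tensor([1.2, -0.3, 2.5, 0.1])
labels = torch.tensor([0.0, 0.0, 1.0, 0.0])
loss = listwise_softmax_ce(scores, labels)
```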
2308.07124 | 71 | Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311–318, 2002.
Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, et al. Rwkv: Reinventing rnns for the transformer era. arXiv preprint arXiv:2305.13048, 2023.
Ethan Perez, Douwe Kiela, and Kyunghyun Cho. True few-shot learning with language models. Advances in Neural Information Processing Systems, 34:11054â11070, 2021.
Luiza Amador Pozzobon, Beyza Ermis, Patrick Lewis, and Sara Hooker. On the challenges of using black-box apis for toxicity evaluation in research. In ICLR 2023 Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models, 2023.
Julian Aron Prenner and Romain Robbes. Automatic program repair with openai's codex: Evaluating quixbugs. arXiv preprint arXiv:2111.03922, 2021. | 2308.07124#71 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 72 | Recently, there have been some attempts [131, 144, 145] to rerank documents by fine-tuning decoder-only models (such as LLaMA). For example, RankLLaMA [131] proposes formatting the query-document pair into a prompt "query: {query} document: {document} [EOS]" and utilizes the last token representation for relevance calculation (a minimal scoring sketch follows this record). Besides, RankingGPT [144] has been proposed to bridge the gap between LLMs' conventional training objectives and the specific needs of document ranking through two-stage training. The first stage involves continuously pretraining LLMs using a large number of relevant text pairs collected from web resources, helping the LLMs to naturally generate queries relevant to the input document. The second stage focuses on improving the model's text ranking performance using high-quality supervised data and well-designed loss functions. Different from these pointwise rerankers [131, 144], Rank-without-GPT [145] proposes to train a listwise reranker that directly outputs a reranked document list. The authors first demonstrate that existing pointwise datasets (such as MS MARCO [111]), which only contain binary query-document labels, are insufficient for training efficient | 2308.07107#72 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
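A minimal sketch of the RankLLaMA-style pointwise scoring described above, assuming a Hugging Face decoder-only model. The checkpoint name is a placeholder, and the linear head here is randomly initialized (in practice it would be trained jointly with the backbone); this is not the paper's released code.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; the actual RankLLaMA weights and template may differ.
name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
head = torch.nn.Linear(model.config.hidden_size, 1)  # trained jointly in practice

def relevance_score(query: str, document: str) -> float:
    # Format the pair with the prompt template described in the chunk above.
    text = f"query: {query} document: {document}{tokenizer.eos_token}"
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # [1, seq_len, hidden_size]
    return head(hidden[:, -1, :]).item()            # score from the last ([EOS]) token
```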
Julian Aron Prenner and Romain Robbes. Runbugrun – an executable dataset for automated program repair. arXiv preprint arXiv:2304.01102, 2023.
Ofir Press, Noah A Smith, and Mike Lewis. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409, 2021.
Vipul Raheja, Dhruv Kumar, Ryan Koo, and Dongyeop Kang. Coedit: Text editing by task-specific instruction tuning. arXiv preprint arXiv:2305.09857, 2023.
Machel Reid and Graham Neubig. Learning to model editing processes. arXiv preprint arXiv:2205.12374, 2022.
Ehud Reiter. A structured review of the validity of bleu. Computational Linguistics, 44(3):393–401, 2018.
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07124 | 73 | Ehud Reiter. A structured review of the validity of bleu. Computational Linguistics, 44(3):393–401, 2018.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. International Conference on Learning Representations (ICLR), 2022. URL https://openreview.net/forum?id=9Vrb9D0WI4.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022a.
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 74 | # 5.2 Utilizing LLMs as Unsupervised Rerankers
As the size of LLMs scales up (e.g., exceeding 10 billion parameters), it becomes increasingly difficult to fine-tune the reranking model. Addressing this challenge, recent efforts have attempted to prompt LLMs to directly enhance document reranking in an unsupervised way. In general, these prompting strategies can be divided into three categories: pointwise, listwise, and pairwise methods. A comprehensive exploration of these strategies follows in the subsequent sections.
# 5.2.1 Pointwise methods
The pointwise methods measure the relevance between a query and a single document, and can be categorized into two types: relevance generation [146, 147] and query generation [148–150].
The upper part of Figure 6 (a) shows an example of relevance generation based on a given prompt, where LLMs output a binary label ("Yes" or "No") based on whether the document is relevant to the query. Following [13], the query-document relevance score f(q, d) can be calculated based on the log-likelihood of the tokens "Yes" and "No" with a softmax function:
$f(q, d) = \frac{\exp(S_Y)}{\exp(S_Y) + \exp(S_N)}$, (1) | 2308.07107#74 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
Teven Le Scao, Thomas Wang, Daniel Hesslow, Lucile Saulnier, Stas Bekman, M Saiful Bari, Stella Biderman, Hady Elsahar, Niklas Muennighoff, Jason Phang, et al. What language model to train if you have one million gpu hours? arXiv preprint arXiv:2210.15424, 2022b.
Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, and Sebastian Riedel. Peer: A collaborative language model. arXiv preprint arXiv:2208.11663, 2022.
Natalie Schluter. The limits of automatic summarisation according to rouge. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pp. 41–45. Association for Computational Linguistics, 2017.
Noam M. Shazeer. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150, 2019. | 2308.07124#74 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 75 | $f(q, d) = \frac{\exp(S_Y)}{\exp(S_Y) + \exp(S_N)}$, (1)
where $S_Y$ and $S_N$ represent the LLM's log-likelihood scores of "Yes" and "No" respectively. In addition to binary labels, Zhuang et al. [147] propose to incorporate fine-grained relevance labels (e.g., "highly relevant", "somewhat relevant" and "not relevant") into the prompt, which helps LLMs more effectively differentiate among documents with varying levels of relevance to a query.
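A minimal sketch of the relevance-generation score in Eq. (1), assuming a Hugging Face causal LM. The checkpoint and prompt wording are placeholders, and " Yes"/" No" are assumed to be single tokens under the tokenizer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")    # placeholder checkpoint
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def relevance(query: str, document: str) -> float:
    prompt = (f"Document: {document}\nQuery: {query}\n"
              "Does the document answer the query?\nAnswer:")
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits[0, -1]         # next-token scores
    s_yes = logits[tok.encode(" Yes")[0]]      # score of the "Yes" token
    s_no = logits[tok.encode(" No")[0]]        # score of the "No" token
    # Eq. (1): softmax over the two answer tokens.
    return torch.softmax(torch.stack([s_yes, s_no]), dim=0)[0].item()
```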
| 2308.07107#75 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 75 | Noam M. Shazeer. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150, 2019.
Bo Shen, Jiaxin Zhang, Taihong Chen, Daoguang Zan, Bing Geng, An Fu, Muhan Zeng, Ailun Yu, Jichuan Ji, Jingyang Zhao, Yuenan Guo, and Qianxiang Wang. Pangu-coder2: Boosting large language models for code with ranking feedback, 2023.
Ensheng Shi, Yanlin Wang, Lun Du, Junjie Chen, Shi Han, Hongyu Zhang, Dongmei Zhang, and Hongbin Sun. On the evaluation of neural code summarization. In Proceedings of the 44th International Conference on Software Engineering, pp. 1597–1608, 2022.
Disha Shrivastava, Denis Kocetkov, Harm de Vries, Dzmitry Bahdanau, and Torsten Scholak. Repofusion: Training code models to understand your repository. arXiv preprint arXiv:2306.10998, 2023a.
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 76 | [Figure 6 prompt examples. Relevance generation: "Document: #{document} Query: #{query} Does the document answer the query?" with output Yes/No. Query generation: "Please write a query based on this document." Listwise: "The following are documents related to query #{query}. Rank these documents based on their relevance to the query." Pairwise: "Given a query #{query}, which of the following two documents is more relevant to the query?" with output Document 1 / Document 2.]
Fig. 6. Three types of unsupervised reranking methods: (a) pointwise methods that consist of relevance generation (upper) and query generation (lower), (b) listwise methods, and (c) pairwise methods.
As for the query generation shown in the lower part of Figure 6 (a), the query-document relevance score is determined by the average log-likelihood of generating the actual query tokens based on the document:
$\mathrm{score} = \frac{1}{|q|} \sum_{i=1}^{|q|} \log p(q_i \mid q_{<i}, d, \mathcal{P})$, (2) | 2308.07107#76 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
Disha Shrivastava, Hugo Larochelle, and Daniel Tarlow. Repository-level prompt generation for large language models of code. In International Conference on Machine Learning, pp. 31693–31715. PMLR, 2023b.
Marta Skreta, Naruki Yoshikawa, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Kourosh Darvish, Alán Aspuru-Guzik, Florian Shkurti, and Animesh Garg. Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting. arXiv preprint arXiv:2303.14100, 2023.
Dominik Sobania, Martin Briesch, Carol Hanna, and Justyna Petke. An analysis of the automatic bug fixing performance of chatgpt. arXiv preprint arXiv:2301.08653, 2023. | 2308.07124#76 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 77 | $\mathrm{score} = \frac{1}{|q|} \sum_{i=1}^{|q|} \log p(q_i \mid q_{<i}, d, \mathcal{P})$, (2)
where |q| denotes the token number of query q, d denotes the document, and P represents the provided prompt (a scoring sketch follows this record). The documents are then reranked based on their relevance scores. It has been proven that some LLMs (such as T0) yield significant performance in zero-shot document reranking based on the query generation method [148]. Recently, research [149] has also shown that LLMs pre-trained without any supervised instruction fine-tuning (such as LLaMA) also yield robust zero-shot ranking ability. Although effective, these methods primarily rely on a handcrafted prompt (e.g., "Please write a query based on this document"), which may not be optimal. As the prompt is a key factor in instructing LLMs to perform various NLP tasks, it is important to optimize it for better performance. Along this line, the discrete prompt optimization method Co-Prompt [150] has been proposed for better prompt generation in reranking tasks. Besides, PaRaDe [151] proposes a difficulty-based method to select few-shot demonstrations to include in the prompt, demonstrating significant improvements over zero-shot prompts. | 2308.07107#77 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
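A minimal sketch of the query-generation score in Eq. (2), assuming a Hugging Face causal LM. The checkpoint and prompt template are placeholders, not the templates used in the cited papers.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")    # placeholder checkpoint
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def query_log_likelihood(query: str, document: str) -> float:
    """Eq. (2): average log p(q_i | q_<i, d, P) over the query tokens."""
    prompt = ("Please write a query based on this document.\n"
              f"Document: {document}\nQuery:")
    p_ids = tok(prompt, return_tensors="pt").input_ids
    q_ids = tok(" " + query, return_tensors="pt").input_ids
    ids = torch.cat([p_ids, q_ids], dim=1)
    with torch.no_grad():
        logits = lm(ids).logits
    log_probs = F.log_softmax(logits[0, :-1], dim=-1)  # position t predicts token t+1
    q_start = p_ids.size(1) - 1                        # first prediction of a query token
    positions = torch.arange(q_start, ids.size(1) - 1)
    token_lp = log_probs[positions, ids[0, q_start + 1:]]
    return token_lp.mean().item()
```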
2308.07124 | 77 | Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models, 2022. URL https://arxiv.org/abs/2206.04615.
Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, and Mohit Iyyer. Do long-range language models actually use long-range context? ArXiv, abs/2109.09115, 2021. URL https://api. semanticscholar.org/CorpusID:237572264.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. | 2308.07124#77 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 78 | query and a document list into the prompt and instruct the LLMs to output the reranked document identifiers. Due to the limited input length of LLMs, it is not feasible to insert all candidate documents into the prompt. To alleviate this issue, these methods employ a sliding window strategy that reranks a subset of candidate documents each time: the window slides from back to front, re-ranking only the documents within the window at each step (a sliding-window sketch follows this record).
Although listwise methods have yielded promising performance, they still suffer from some weaknesses. First, according to the experimental results [152], only the GPT-4-based method can achieve competitive performance. When using smaller language models (e.g., FLAN-UL2 with 20B parameters), listwise methods may produce very few usable results and underperform many supervised methods. Second, the performance of listwise methods is highly sensitive to the document order in the prompt. When the document order is randomly shuffled, listwise methods perform even worse than BM25 [152], revealing positional bias issues in the listwise ranking of LLMs. To alleviate this issue, Tang et al. [154] introduce a permutation self-consistency method, which involves shuffling the list in the prompt and aggregating the generated results to achieve a more accurate and unbiased ranking.
# 5.2.3 Pairwise Methods | 2308.07107#78 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
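A minimal sketch of the back-to-front sliding-window strategy described above. Here `rank_window`, `window`, and `step` are illustrative stand-ins for the listwise LLM call and its hyperparameters; the toy comparator only simulates an LLM's window ranking.

```python
from typing import Callable, List

def sliding_window_rerank(docs: List[str],
                          rank_window: Callable[[List[str]], List[int]],
                          window: int = 4, step: int = 2) -> List[str]:
    """Rerank from back to front. `rank_window` stands in for one listwise
    LLM call and returns a permutation of indices for the small window it
    receives."""
    docs = list(docs)
    start = max(len(docs) - window, 0)
    while True:
        chunk = docs[start:start + window]
        order = rank_window(chunk)                    # one LLM call per window
        docs[start:start + window] = [chunk[i] for i in order]
        if start == 0:
            break
        start = max(start - step, 0)                  # slide toward the front
    return docs

# Toy usage with a stand-in "LLM" that ranks longer documents higher.
ranked = sliding_window_rerank(
    ["dd", "a", "ccc", "bbbb", "e"],
    rank_window=lambda ds: sorted(range(len(ds)), key=lambda i: -len(ds[i])),
)
```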
2308.07124 | 78 | Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, 2022.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Lewis Tunstall, Nathan Lambert, Nazneen Rajani, Edward Beeching, Teven Le Scao, Leandro von Werra, Sheon Han, Philipp Schmid, and Alexander Rush. Creating a coding assistant with starcoder. Hugging Face Blog, 2023. https://huggingface.co/blog/starchat. | 2308.07124#78 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 |
2308.07107 | 79 |
Note that these pointwise methods rely on accessing the output logits of LLMs to calculate the query-document relevance scores. As a result, they are not applicable to closed-source LLMs whose API-returned results do not include logits.
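To make the pointwise idea concrete, here is a minimal sketch of query-generation scoring: a document is scored by the log-likelihood of the user query given the document. The checkpoint, prompt wording, and helper names are illustrative assumptions, not the exact setup of [148].

```python
# A minimal sketch of pointwise "query generation" scoring: rank documents by
# the mean log-probability of the query tokens given the document.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large").eval()

def query_log_likelihood(query: str, document: str) -> float:
    prompt = f"Passage: {document}\nPlease write a question based on this passage."
    inputs = tok(prompt, return_tensors="pt", truncation=True)
    labels = tok(query, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean NLL of query tokens
    return -loss.item()  # higher (less negative) = more relevant

docs = ["LLMs can rerank search results.", "Heap sort runs in O(N log N)."]
ranked = sorted(docs, key=lambda d: query_log_likelihood("how do LLMs rerank?", d),
                reverse=True)
```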
# 5.2.2 Listwise Methods
Listwise methods [152, 153] aim to directly rank a list of documents (see Figure 6 (b)). These methods insert the query and a list of candidate documents into the prompt and instruct the LLM to output the reranked document identifiers.
# 5.2.3 Pairwise Methods
In pairwise methods [155], LLMs are given a prompt that consists of a query and a document pair (see Figure 6 (c)). Then, they are instructed to generate the identifier of the more relevant document. To rerank all candidate documents, aggregation methods like AllPairs are used. AllPairs first generates all possible document pairs and aggregates a final relevance score for each document. To speed up the ranking process, efficient sorting algorithms, such as heap sort and bubble sort, are usually employed [155].
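The pairwise procedure can be sketched as follows: an LLM judgment decides the more relevant passage of each pair, and k bubble-sort passes surface the top-k documents. Here `llm` is an assumed text-completion callable, not a specific API.

```python
# A sketch of pairwise reranking (cf. PRP [155]): the LLM picks the more
# relevant passage of a pair, and k bubble-sort passes surface the top-k.
def prefers_a(llm, query: str, doc_a: str, doc_b: str) -> bool:
    prompt = (f'Query: "{query}"\n'
              f"Passage A: {doc_a}\nPassage B: {doc_b}\n"
              "Which passage is more relevant to the query? Answer A or B.")
    return llm(prompt).strip().upper().startswith("A")

def bubble_top_k(llm, query: str, docs: list[str], k: int = 10) -> list[str]:
    docs = list(docs)
    for i in range(min(k, len(docs))):           # one pass per rank position
        for j in range(len(docs) - 1, i, -1):    # bubble the winner forward
            if prefers_a(llm, query, docs[j], docs[j - 1]):
                docs[j], docs[j - 1] = docs[j - 1], docs[j]
    return docs[:k]
```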
| 2308.07107#79 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 79 | Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Conference on Neural Information Processing Systems (NeurIPS), 2019. URL https://arxiv.org/abs/1905.00537.
Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 billion parameter autoregressive language model, 2021.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In International Conference on Learning Representations (ICLR), 2023a. URL https://openreview.net/forum?id=1PL1NIMMrw.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560, 2022a.
| 2308.07124#79 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 80 |
TABLE 6. The comparison between different methods. N denotes the number of documents to rerank. The Complexity, Logits, and Batching columns represent the computational complexity, whether the method accesses the LLM's logits, and whether it allows batch inference, respectively. k is the constant in the sliding-window strategy. For performance, we use NDCG@10 as the metric, and the results are calculated by reranking the top 100 documents retrieved by BM25 on TREC-DL2019 and TREC-DL2020. The results come from a previous study [155]. *Since the parameters of ChatGPT have not been released, its parameter counts are based on public estimates [164].
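Since the caption fixes NDCG@10 as the metric, a self-contained sketch of its computation may help; the relevance grades below are toy values, not TREC judgments.

```python
# A self-contained sketch of NDCG@10 (the metric reported in Table 6).
import math

def dcg(gains, k=10):
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg(gains, k=10):
    ideal = dcg(sorted(gains, reverse=True), k)
    return dcg(gains, k) / ideal if ideal > 0 else 0.0

# graded relevance of a reranked top-10 list for one query (toy data)
print(round(ndcg([3, 2, 3, 0, 1, 2, 0, 0, 1, 0]), 4))
```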
| 2308.07107#80 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 80 |
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. arXiv preprint arXiv:2204.07705, 2022b.
Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023b.
Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi. CodeT5+: Open code large language models for code understanding and generation. arXiv preprint arXiv:2305.07922, 2023c. | 2308.07124#80 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 81 |
Category | Method | LLM | Size | Complexity | Logits | Batching | TREC-DL19 | TREC-DL20
Initial Retriever | BM25 | - | - | - | - | - | 50.58 | 47.96
Supervised | monoBERT [140] | BERT | 340M | - | ✓ | ✓ | 70.50 | 67.28
Supervised | monoT5 [13] | T5 | 220M | - | ✓ | ✓ | 71.48 | 66.99
Supervised | RankT5 [143] | T5 | 3B | - | ✓ | ✓ | 71.22 | 69.49
Unsupervised-Pointwise | Query Generation [148] | FLAN-UL2 | 20B | O(N) | ✓ | ✓ | 58.95 | 60.02
Unsupervised-Pointwise | Relevance Generation [146] | FLAN-UL2 | 20B | O(N) | ✓ | ✓ | 64.61 | 65.39
Unsupervised-Listwise | RankGPT3.5 [152] | gpt-3.5-turbo | 154B* | O(k·N) | ✗ | ✗ | 65.80 | 62.91
Unsupervised-Listwise | RankGPT4 [152] | gpt-4 | 1T* | O(k·N) | ✗ | ✗ | 75.59 | 70.56
Unsupervised-Pairwise | PRP-Allpair [155] | FLAN-UL2 | 20B | O(N²) | ✓ | ✓ | 72.42 | 70.68
Unsupervised-Pairwise | PRP-Heapsort [155] | FLAN-UL2 | 20B | O(N·log N) | ✓ | ✗ | 71.88 | 69.43
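The O(k·N) rows correspond to RankGPT's sliding-window strategy; a sketch follows, where `llm_rank_window` is an assumed helper that returns the LLM's ordering of the documents inside one window, not a real API.

```python
# A sketch of sliding-window listwise reranking (cf. RankGPT [152]): windows
# of `window` documents are reranked from the bottom of the candidate list
# upward with stride `step`, promoting relevant documents toward the top.
def sliding_window_rerank(llm_rank_window, query, docs, window=20, step=10):
    order = list(range(len(docs)))
    start = max(len(order) - window, 0)
    while True:
        chunk = order[start:start + window]
        local = llm_rank_window(query, [docs[i] for i in chunk])  # window order
        order[start:start + window] = [chunk[i] for i in local]
        if start == 0:
            break
        start = max(start - step, 0)
    return [docs[i] for i in order]
```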
| 2308.07107#81 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 81 | Zhiruo Wang, Grace Cuenca, Shuyan Zhou, Frank F Xu, and Graham Neubig. MCoNaLa: a benchmark for code generation from multiple natural languages. arXiv preprint arXiv:2203.08388, 2022c.
Zhiruo Wang, Shuyan Zhou, Daniel Fried, and Graham Neubig. Execution-based evaluation for open-domain code generation. arXiv preprint arXiv:2212.10481, 2022d.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. International Conference on Learning Representations (ICLR), 2022. URL https://openreview.net/forum?id=gEZrGCozdqR.
Jiayi Wei, Greg Durrett, and Isil Dillig. Coeditor: Leveraging contextual changes for multi-round code auto-editing. arXiv preprint arXiv:2305.18584, 2023.
Minghao Wu and Alham Fikri Aji. Style over substance: Evaluation biases for large language models. arXiv preprint arXiv:2307.03025, 2023. | 2308.07124#81 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 82 | These sorting algorithms utilize efficient data structures to compare document pairs selectively and elevate the most relevant documents to the top of the ranking list, which is particularly useful in top-k ranking. Experimental results show state-of-the-art performance on standard benchmarks using moderate-size LLMs (e.g., FLAN-UL2 with 20B parameters), which are much smaller than those typically employed in listwise methods (e.g., GPT-3.5).
Although effective, pairwise methods still suffer from high time complexity. To alleviate this efficiency problem, a setwise approach [156] has been proposed to compare a set of documents at a time and select the most relevant one. This approach allows the sorting algorithms (such as heap sort) to compare more than two documents at each step, thereby reducing the total number of comparisons and speeding up the sorting process.
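A sketch of the setwise idea: a tournament repeatedly asks the LLM for the best document within a small set, filling one rank position at a time with far fewer comparisons than enumerating all pairs. `llm_pick_best` is an assumed helper that returns the index of the most relevant document in a set, not a real API.

```python
# A sketch of setwise top-k selection (cf. [156]); documents are assumed to
# be unique strings so that list.remove works on value equality.
def setwise_top_k(llm_pick_best, query, docs, k=10, set_size=4):
    remaining, ranked = list(docs), []
    while remaining and len(ranked) < k:
        candidates = remaining
        while len(candidates) > 1:          # tournament round
            winners = []
            for i in range(0, len(candidates), set_size):
                group = candidates[i:i + set_size]
                best = group[0] if len(group) == 1 else group[llm_pick_best(query, group)]
                winners.append(best)
            candidates = winners
        ranked.append(candidates[0])        # overall winner takes next position
        remaining.remove(candidates[0])
    return ranked
```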
# 5.2.4 Comparison and Discussion
In this part, we compare the different unsupervised methods from various aspects to better illustrate their strengths and weaknesses; the comparison is summarized in Table 6. We choose representative methods [146, 148, 152, 155] for pointwise, listwise, and pairwise ranking, and include several supervised methods [13, 140, 143] mentioned in Section 5.1 for performance comparison.
| 2308.07107#82 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 82 | Chunqiu Steven Xia and Lingming Zhang. Conversational automated program repair. arXiv preprint arXiv:2301.13246, 2023.
Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, and Ves Stoyanov. Training trajectories of language models across scales. arXiv preprint arXiv:2212.09803, 2022.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. WizardLM: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023a.
Frank F Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, pp. 1–10, 2022a.
Shengbin Xu, Yuan Yao, Feng Xu, Tianxiao Gu, and Hanghang Tong. Combining code context and fine-grained code difference for commit message generation. In Proceedings of the 13th Asia-Pacific Symposium on Internetware, pp. 242–251, 2022b. | 2308.07124#82 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 83 | # 5.3 Utilizing LLMs for Training Data Augmentation
Furthermore, in the realm of reranking, researchers have explored the integration of LLMs for training data augmentation [157–162]. For example, ExaRanker [157] generates explanations for retrieval datasets using GPT-3.5, and subsequently trains a seq2seq ranking model to generate relevance labels along with corresponding explanations for given query-document pairs. InPars-Light [158] is proposed as a cost-effective method to synthesize queries for documents by prompting LLMs. In contrast to InPars-Light [158], the ChatGPT-RetrievalQA dataset [159] is constructed by using LLMs to generate synthetic documents in response to user queries.
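As an illustration of this style of augmentation, the following sketch follows the InPars-style recipe: few-shot prompting an LLM to write a plausible query for each document yields synthetic (query, document) positives for training a reranker. The prompt and the `llm` callable are assumptions, not the exact setup of [158].

```python
# A sketch of InPars-style training data augmentation [158].
FEW_SHOT = (
    "Document: The Eiffel Tower is 330 metres tall.\n"
    "Relevant query: how tall is the eiffel tower\n\n"
    "Document: {doc}\nRelevant query:"
)

def synthesize_training_pairs(llm, documents):
    pairs = []
    for doc in documents:
        query = llm(FEW_SHOT.format(doc=doc)).strip()
        pairs.append((query, doc))  # positives; negatives are typically mined with BM25
    return pairs
```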
| 2308.07107#83 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 83 | Zhiyang Xu, Ying Shen, and Lifu Huang. MultiInstruct: Improving multi-modal zero-shot learning via instruction tuning, 2023b.
Michihiro Yasunaga and Percy Liang. Break-it-fix-it: Unsupervised learning for program repair. In International Conference on Machine Learning, pp. 11941–11952. PMLR, 2021.
He Ye, Matias Martinez, Thomas Durieux, and Martin Monperrus. A comprehensive study of automatic program repair on the quixbugs benchmark. Journal of Systems and Software, 171: 110825, 2021.
Burak Yetistiren, Isik Ozsoy, and Eray Tuzun. Assessing the quality of github copilot's code generation. In Proceedings of the 18th International Conference on Predictive Models and Data Analytics in Software Engineering, pp. 62–71, 2022.
Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. Learning to mine aligned code and natural language pairs from stack overflow. In International Conference on Mining Software Repositories, MSR, pp. 476–486. ACM, 2018. doi: https://doi.org/10.1145/3196398.3196408. | 2308.07124#83 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 84 | Recently, many studies [160–162] have also attempted to distill the document ranking capability of LLMs into a specialized model. RankVicuna [160] proposes to use the ranking list produced by RankGPT3.5 [152] as the gold list to train a 7B-parameter Vicuna model. RankZephyr [161] introduces a two-stage training strategy for distillation: it first applies the RankVicuna recipe to train Zephyr, and then further finetunes the model in the second stage with the ranking results from RankGPT4. These two studies not only demonstrate competitive results but also alleviate the non-reproducibility of ranking results from black-box LLMs. Besides, researchers [162] have also tried to distill the ranking ability of a pairwise ranker, which is computationally demanding, into a simpler but more efficient pointwise ranker.
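A minimal sketch of the distillation setup: a teacher permutation from a black-box LLM is expanded into ordered document pairs, on which a smaller student scorer can be trained (e.g., with a RankNet-style objective). This is an illustrative reduction, not the exact recipe of [160, 161].

```python
# A sketch of permutation distillation: turn one teacher ranking into
# pairwise training examples for a student reranker.
def permutation_to_pairs(teacher_order, docs):
    """teacher_order lists document indices from most to least relevant."""
    pairs = []
    for rank, i in enumerate(teacher_order):
        for j in teacher_order[rank + 1:]:
            pairs.append((docs[i], docs[j]))  # docs[i] should outscore docs[j]
    return pairs
```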
| 2308.07107#84 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 84 | Zheng-Xin Yong, Hailey Schoelkopf, Niklas Muennighoff, Alham Fikri Aji, David Ifeoluwa Adelani, Khalid Almubarak, M Saiful Bari, Lintang Sutawika, Jungo Kasai, Ahmed Baruwa, et al. BLOOM+1: Adding language support to BLOOM for zero-shot prompting. arXiv preprint arXiv:2212.09535, 2022.
Hao Yu, Bo Shen, Dezhi Ran, Jiaxin Zhang, Qi Zhang, Yuchi Ma, Guangtai Liang, Ying Li, Tao Xie, and Qianxiang Wang. CoderEval: A benchmark of pragmatic code generation with generative pre-trained models. arXiv preprint arXiv:2302.00288, 2023.
Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. What matters in training a GPT4-style language model with multimodal inputs? arXiv preprint arXiv:2307.02469, 2023. | 2308.07124#84 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 85 | The pointwise methods (Query Generation and Relevance Generation) judge the relevance of each query-document pair independently, thus offering lower time complexity and enabling batch inference. However, compared to the other methods, they do not have an advantage in terms of performance. The listwise method yields strong performance, especially when calling GPT-4, but suffers from expensive API costs and non-reproducibility [160]. Compared with the listwise method, the pairwise method shows competitive results based on a much smaller model, FLAN-UL2 (20B). Stemming from the necessity to compare an extensive number of document pairs, its primary drawback is low efficiency.
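A back-of-the-envelope comparison of LLM-call counts makes these trade-offs tangible; the constants below (N = 100 candidates, top-10, window 20, step 10) are illustrative and mirror the complexity column of Table 6.

```python
# Illustrative LLM-call counts for reranking N = 100 candidates.
import math

N, k, window, step = 100, 10, 20, 10
pointwise = N                                    # O(N), fully batchable
listwise_windows = (N - window) // step + 1      # one sliding pass: 9 calls
pairwise_allpairs = N * (N - 1) // 2             # O(N^2): 4950 comparisons
pairwise_heapsort = round(N + k * math.log2(N))  # ~O(N log N) top-k: ~166
print(pointwise, listwise_windows, pairwise_allpairs, pairwise_heapsort)
```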
# 5.4 Limitations
Although recent research on utilizing LLMs for document reranking has made significant progress, it still faces some challenges. For example, considering cost and efficiency, minimizing the number of calls to LLM APIs is a problem worth studying. Besides, while existing studies mainly focus on applying LLMs to open-domain datasets (such as MS MARCO [111]) or relevance-based text ranking tasks, their adaptability to in-domain datasets [128] and non-standard ranking datasets [165] remains an area that demands more comprehensive exploration. | 2308.07107#85 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 85 | Chunyan Zhang, Junchao Wang, Qinglei Zhou, Ting Xu, Ke Tang, Hairen Gui, and Fudong Liu. A survey of automatic source code summarization. Symmetry, 14(3):471, 2022a.
Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. RepoCoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570, 2023a.
Hang Zhang, Xin Li, and Lidong Bing. Video-LLaMA: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023b.
Jialu Zhang, José Cambronero, Sumit Gulwani, Vu Le, Ruzica Piskac, Gustavo Soares, and Gust Verbruggen. Repairing bugs in python assignments using large language models. arXiv preprint arXiv:2209.14876, 2022b. | 2308.07124#85 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 86 | # 6 READER
With the impressive capabilities of LLMs in understanding, extracting, and processing textual data, researchers have explored expanding the scope of IR systems beyond content ranking to answer generation. In this evolution, a reader module has been introduced to generate answers based on the document corpus in IR systems. By integrating a reader module, IR systems can directly present conclusive passages to users. In this new paradigm, users can simply read the generated answer passage instead of analyzing a ranked list of documents. Furthermore, by repeatedly providing documents to LLMs based on the texts they are generating, the final answers can potentially be more accurate and information-rich than the original retrieved lists. | 2308.07107#86 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 86 | Jiyang Zhang, Sheena Panthaplackel, Pengyu Nie, Junyi Jessy Li, and Milos Gligoric. CoditT5: Pretraining for source code and natural language editing. In 37th IEEE/ACM International Conference on Automated Software Engineering, pp. 1-12, 2022c.
Tianyi Zhang, Tao Yu, Tatsunori Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, and Sida Wang. Coder reviewer reranking for code generation. In International Conference on Machine Learning, pp. 41832â41846. PMLR, 2023c.
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, et al. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x. arXiv preprint arXiv:2303.17568, 2023.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023a. | 2308.07124#86 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 87 | A naive strategy for implementing this function is to heuristically provide the LLM with documents relevant to the user query or to the previously generated text to support subsequent generation. However, this passive approach limits LLMs to merely receiving documents collected by IR systems, without active engagement. An alternative solution is to train LLMs to interact proactively with search engines; for example, LLMs can formulate their own queries instead of relying solely on user queries or generated texts for references. According to the way LLMs utilize IR systems in the reader module, we can categorize them into passive readers and active readers. Each approach has its own advantages and challenges for implementing LLM-powered answer generation in IR systems. Furthermore, since the documents provided by upstream IR systems are sometimes too long to feed directly as input to LLMs, compression modules have been proposed to extractively or abstractively compress the retrieved contexts so that LLMs can understand them and generate answers for queries. We present these reader and compressor modules in the following parts and briefly introduce existing analyses of the retrieval-augmented generation strategy and its applications.
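To make the contrast concrete, the sketch below shows an active reader in miniature: the LLM first writes its own search query before reading the results. This is a minimal illustration under stated assumptions, not the interface of any cited system; `llm` and `search` are hypothetical stand-ins for an LLM completion call and an IR backend.

```python
# A minimal sketch of the active-reader idea, assuming two hypothetical
# callables: llm(prompt) -> str for LLM text completion and
# search(query, k) -> list[str] for an IR backend.

def active_reader(question: str, llm, search, k: int = 4) -> str:
    # Step 1: the LLM formulates its own query instead of using the raw
    # user question directly.
    model_query = llm(
        "Write one short search query that would help answer this "
        f"question:\n{question}"
    ).strip()
    # Step 2: retrieve with the model-written query and read the results.
    context = "\n\n".join(search(model_query, k))
    return llm(f"Documents:\n{context}\n\nQuestion: {question}\nAnswer:")
```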
# 6.1 Passive Reader | 2308.07107#87 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 87 | Shuyan Zhou, Uri Alon, Sumit Agarwal, and Graham Neubig. Codebertscore: Evaluating code generation with pretrained models of code. arXiv preprint arXiv:2302.05527, 2023b.
Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. arXiv preprint arXiv:2211.01910, 2022.
Ming Zhu, Aneesh Jain, Karthik Suresh, Roshan Ravindran, Sindhu Tipirneni, and Chandan K Reddy. Xlcost: A benchmark dataset for cross-lingual code intelligence. arXiv preprint arXiv:2206.08474, 2022.
Terry Yue Zhuo. Large language models are state-of-the-art evaluators of code generation. arXiv preprint arXiv:2304.14317, 2023.
# APPENDIX
# Contents | 2308.07124#87 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 88 | # 6.1 Passive Reader
To generate answers for users, a straightforward strategy is to supply the documents retrieved by IR systems for the queries or for the previously generated texts as inputs to LLMs for creating passages [23, 166-171, 173, 175, 176, 178-180]. By this means, these approaches use the LLMs and IR systems separately, with LLMs functioning as passive recipients of documents from the IR systems. The strategies for utilizing LLMs within IR systems' reader modules can be categorized into the following three groups according to the frequency of retrieving documents for LLMs.
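As a concrete illustration of this passive pattern, the sketch below simply concatenates the retrieved documents into the LLM's input. It is a schematic reading only; `retrieve` and `llm` are hypothetical placeholders rather than the API of any of the cited systems.

```python
from typing import Callable, List

# A sketch of a passive reader: the LLM is a passive recipient of
# whatever the IR system returns. retrieve(query, k) -> list[str] and
# llm(prompt) -> str are hypothetical placeholders.

def passive_reader(query: str,
                   retrieve: Callable[[str, int], List[str]],
                   llm: Callable[[str], str],
                   k: int = 4) -> str:
    docs = retrieve(query, k)
    context = "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    prompt = ("Answer the question using the numbered documents below.\n\n"
              f"{context}\n\nQuestion: {query}\nAnswer:")
    return llm(prompt)
```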
# 6.1.1 Once-Retrieval Reader
To obtain useful references for LLMs to generate responses to user queries, an intuitive way is to retrieve the top documents based on the queries themselves at the beginning. For example, REALM [166] adopts this strategy by directly attending over the document contents along with the original queries to predict the final answers based on masked language modeling. RAG [167] follows this strategy but applies the generative language modeling paradigm. However, these two approaches only use language models with limited | 2308.07107#88 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 88 | A Contributions
B Artifacts
C COMMITPACK and COMMITPACKFT Languages
D Dataset Creation
E Comparing Data Before and After Filtering
F Comparing COMMITPACK and The Stack
G Pretraining on COMMITPACK
H Line Diff Format for Fixing Code
I Results on HUMANEVALFIXDOCS
J Full Instruction Data Ablations
K HUMANEVALFIX Bug Types
L Performance Breakdown by HUMANEVALFIX Bug Type
M Hyperparameters
N Prompts
O Examples
O.1 OCTOCODER
O.2 GPT-4
O.3 WizardCoder
O.4 BLOOMZ
O.5 StarCoder
O.6 InstructCodeT5+
O.7 StarChat-β
O.8 Diff Codegen | 2308.07124#88 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 89 | parameters, such as BERT and BART. Recent approaches such as REPLUG [168] and Atlas [169] have improved on them by leveraging LLMs such as GPTs, T5s, and LLaMAs for response generation. To yield better answer generation performance, these models usually fine-tune LLMs on QA tasks. However, due to limited computing resources, many methods [170, 171, 179] choose to prompt LLMs for generation, since this allows the use of larger LMs. Furthermore, to improve the quality of the generated answers, several approaches [172, 181] also try to train or prompt the LLMs to generate contexts such as citations or notes in addition to the answers, forcing LLMs to understand and assess the relevance of the retrieved passages to the user queries. Some approaches [180] evaluate the importance of each retrieved reference using policy gradients to indicate which reference is more useful for generation. Besides, researchers have explored instruction-tuning LLMs such as LLaMA to improve their ability to generate conclusive passages relying on retrieved knowledge [182, 183].
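A prompting-style variant of the citation/note idea above can be sketched as follows. The prompt wording and the `llm` callable are illustrative assumptions, not the exact prompts of the cited methods.

```python
# A sketch of prompting an LLM to note the usefulness of each retrieved
# passage and then answer with bracketed citations. The prompt text and
# llm(prompt) -> str are illustrative assumptions only.

def cited_answer(query: str, passages: list[str], llm) -> str:
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "For each passage, write one short note on whether it helps "
        "answer the question. Then give a final answer citing the "
        "passages you used as [1], [2], ...\n\n"
        f"Passages:\n{numbered}\n\nQuestion: {query}\nNotes and answer:"
    )
    return llm(prompt)
```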
# 6.1.2 Periodic-Retrieval Reader | 2308.07107#89 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 89 | P Limitations and Future Work | 2308.07124#89 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 90 | However, when generating long conclusive answers, it has been shown [23, 173] that using only the references retrieved for the original user intents, as in once-retrieval readers, may be inadequate. For example, when generating a passage about "Barack Obama", language models may need additional knowledge about his university, which may not be included in the results of simply searching the initial query. In conclusion, language models may need extra references to support subsequent generation during the generating process, so multiple retrieval processes may be required. To address this, solutions such as RETRO [23] and RALM [173] have emerged, emphasizing the periodic collection of documents based on both the original queries and the concurrently generated texts (triggering a retrieval every n generated tokens). In this manner, when generating the text about the university career of Barack Obama, the LLM can receive additional documents as supplementary materials. This need for additional references highlights the necessity of multiple retrieval iterations to ensure robustness in subsequent answer generation. Notably, RETRO [23] introduces a novel approach incorporating cross-attention between the generated texts and the references within | 2308.07107#90 | Large Language Models for Information Retrieval: A Survey |
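The every-n-tokens loop described above can be sketched as below. This is a schematic reading of the periodic-retrieval idea, not RETRO's or RALM's actual implementation; `llm_continue` and `retrieve` are hypothetical stand-ins.

```python
# A schematic periodic-retrieval loop: re-retrieve after every chunk of
# roughly n generated tokens. llm_continue(prompt, n) -> str (at most n
# more tokens) and retrieve(query, k) -> list[str] are hypothetical.

def periodic_retrieval_generate(query: str, llm_continue, retrieve,
                                n_tokens: int = 32, max_steps: int = 8,
                                k: int = 2) -> str:
    answer = ""
    for _ in range(max_steps):
        # Condition retrieval on the query plus the text generated so far.
        refs = "\n".join(retrieve((query + " " + answer).strip(), k))
        prompt = (f"References:\n{refs}\n\nQuestion: {query}\n"
                  f"Answer so far: {answer}\nContinue the answer:")
        continuation = llm_continue(prompt, n_tokens)
        if not continuation.strip():  # the model signals it is done
            break
        answer += continuation
    return answer
```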
2308.07124 | 90 | Q OCTOBADPACK
# A CONTRIBUTIONS
Niklas Muennighoff created COMMITPACK and HUMANEVALPACK, wrote most of the paper and led the project. Qian Liu devised many quality filters, ran SantaCoder ablations, investigated early training decisions and helped edit the paper. Armel Zebaze created the Self-Instruct data and ran numerous ablations. Niklas Muennighoff, Armel Zebaze and Qinkai Zheng created and evaluated OCTOCODER and OCTOGEEX. Binyuan Hui pretrained SantaCoder, made major contributions to the presentation and helped edit the paper. Terry Yue Zhuo ran GPT-4 evaluations and helped edit the paper. Xiangru Tang provided help on several experiments for evaluation and helped edit the paper. Leandro von Werra provided early guidance, suggested many quality filters and added the commit data to StarCoder pretraining. Niklas Muennighoff, Qian Liu, Binyuan Hui, Swayam Singh and Shayne Longpre conducted the data analysis. Shayne Longpre advised the project and made large contributions to the paper.
# B ARTIFACTS | 2308.07124#90 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 91 | in subsequent answer generation. Notably, RETRO [23] introduces a novel approach that incorporates cross-attention between the generated texts and the references within the Transformer attention calculation, as opposed to directly embedding references into the input texts of LLMs. Since it involves additional cross-attention modules in the Transformer's structure, RETRO trains this model from scratch. However, these two approaches mainly rely on successive spans of n tokens to segment generation and retrieve documents; such spans may not be semantically continuous and may make the collected references noisy and less useful. To solve this problem, some approaches such as IRCoT [175] explore retrieving documents for every generated sentence, which is a more complete semantic unit. Furthermore, researchers find that the whole generated passage can be considered a conclusive context for the current query and can be used to find more relevant knowledge for generating a more thorough answer. Consequently, many recent approaches [174, 184, 185] have also tried to extend this periodic-retrieval paradigm to iteratively use the whole generated passages to retrieve references and re-generate the | 2308.07107#91 | Large Language Models for Information Retrieval: A Survey |
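Sentence-level periodic retrieval, as in the IRCoT-style approaches above, can be sketched like this. It is a minimal sketch under stated assumptions; `next_sentence` and `retrieve` are hypothetical stand-ins rather than IRCoT's actual interface.

```python
# A sketch of sentence-level retrieval: fetch fresh references before
# each new sentence, conditioning on everything written so far.
# next_sentence(query, docs, so_far) -> str and retrieve(query, k) ->
# list[str] are hypothetical stand-ins.

def sentence_level_generate(query: str, next_sentence, retrieve,
                            max_sentences: int = 6, k: int = 2) -> str:
    written = []
    for _ in range(max_sentences):
        docs = retrieve((query + " " + " ".join(written)).strip(), k)
        sent = next_sentence(query, docs, " ".join(written))
        if not sent:  # an empty sentence signals completion
            break
        written.append(sent)
    return " ".join(written)
```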
2308.07124 | 91 | Other models
Diff Codegen 2B (Bradley et al., 2023): https://hf.co/CarperAI/diff-codegen-2b-v2
InstructCodeT5+ (Wang et al., 2023c): https://hf.co/Salesforce/instructcodet5p-16b
BLOOMZ (Muennighoff et al., 2022b): https://hf.co/bigscience/bloomz
StarChat-β (Tunstall et al., 2023): https://hf.co/HuggingFaceH4/starchat-beta
CodeGeeX2 (Zheng et al., 2023): https://github.com/THUDM/CodeGeeX2
SantaCoder (Allal et al., 2023): https://hf.co/bigcode/santacoder
StarCoder (Li et al., 2023b): https://hf.co/bigcode/starcoder
WizardCoder (Luo et al., 2023): https://hf.co/WizardLM/WizardCoder-15B-V1.0
GPT-4 (OpenAI, 2023): https://openai.com/gpt-4
Data Ablations (Appendix J) - Data
Filtered xP3x code
StarCoder Self-Instruct
Filtered | 2308.07124#91 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 92 |
TABLE 7. The comparison of existing representative methods that have a passive reader module. REALM and RAG do not use LLMs, but their frameworks have been widely applied in many following approaches.
Methods | Backbone models | Where to incorporate retrieval | When to retrieve | How to use LLMs
REALM [166] | BERT | Input layer | In the beginning | Fine-tuning
RAG [167] | BART | Input layer | In the beginning | Fine-tuning
REPLUG [168] | GPT | Input layer | In the beginning | Fine-tuning
Atlas [169] | T5 | Input layer | In the beginning | Fine-tuning
Lazaridou et al. [170] | Gopher | Input layer | In the beginning | Prompting
He et al. [171] | GPT | Input layer | In the beginning | Prompting
Chain-of-Note [172] | LLaMA | Input layer | In the beginning | Fine-tuning
RALM [173] | LLaMA & OPT & GPT | Input layer | During generation (every n tokens) | Prompting
RETRO [23] | Transformer | Attention layer | During generation (every n tokens) | Training from scratch
ITERGEN [174] | GPT | Input layer | During generation (every answer) | Prompting
IRCoT [175] | Flan-T5 & GPT | Input layer | During generation (every sentence) | Prompting
FLARE [176] | GPT | Input layer | During generation (aperiodic) | Prompting
Self-RAG [177] | LLaMA | Input layer | During generation (aperiodic) | Fine-tuning
| 2308.07107#92 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 92 | https://openai.com/gpt-4
Data Ablations (Appendix J) - Data
Filtered xP3x code: https://hf.co/datasets/bigcode/xp3x-octopack
StarCoder Self-Instruct: https://hf.co/datasets/codeparrot/self-instruct-starcoder
Filtered OASST: https://hf.co/datasets/bigcode/oasst-octopack
Manual selection (Appendix J): https://hf.co/datasets/bigcode/co-manual
Data Ablations (Appendix J) - Models
Self-Instruct (SI): https://hf.co/bigcode/starcoder-s
OASST (O): https://hf.co/bigcode/starcoder-o
SI + O: https://hf.co/bigcode/starcoder-so
xP3x + O: https://hf.co/bigcode/starcoder-xo
COMMITPACKFT + O (Formatting)
COMMITPACKFT + O (Target loss)
COMMITPACKFT + O (Manual)
COMMITPACKFT + xP3x + O
COMMITPACKFT + xP3x + SI + O | 2308.07124#92 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 93 | answers, until the iterations reach a pre-defined limit. In particular, these methods can be regarded as special periodic-retrieval readers that retrieve passages whenever an answer is (re-)generated. Since the LLMs receive more comprehensive and relevant references as the iterations proceed, these methods, which combine retrieval-augmented-generation and generation-augmented-retrieval strategies, can generate more accurate answers but incur higher computation costs. A minimal sketch of such a loop follows.
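The sketch below illustrates this iterative retrieve-then-generate loop under stated assumptions: `retrieve` and `generate` are hypothetical stand-ins for a real IR system and LLM, and the convergence test is only illustrative.

```python
from typing import Callable, List

def iterative_rag(question: str,
                  retrieve: Callable[[str], List[str]],
                  generate: Callable[[str], str],
                  max_iters: int = 3) -> str:
    """Alternate retrieval and generation until the answer stabilizes."""
    answer = ""
    for _ in range(max_iters):
        # Generation-augmented retrieval: the previous answer enriches the query.
        query = question if not answer else f"{question} {answer}"
        references = retrieve(query)
        prompt = "\n".join(references) + f"\n\nQuestion: {question}\nAnswer:"
        new_answer = generate(prompt)
        if new_answer == answer:  # answer stabilized; stop to save compute
            break
        answer = new_answer
    return answer
```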
# 6.1.3 Aperiodic-Retrieval Reader | 2308.07107#93 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 93 | SI + O: https://hf.co/bigcode/starcoder-so
xP3x + O: https://hf.co/bigcode/starcoder-xo
COMMITPACKFT + O (Formatting): https://hf.co/bigcode/starcoder-co-format
COMMITPACKFT + O (Target loss): https://hf.co/bigcode/starcoder-co-target
COMMITPACKFT + O (Manual): https://hf.co/bigcode/starcoder-co-manual
COMMITPACKFT + xP3x + O: https://hf.co/bigcode/starcoder-cxo
COMMITPACKFT + xP3x + SI + O: https://hf.co/bigcode/starcoder-cxso

SantaCoder ablations (Appendix G, Appendix H)
Commit format Pretraining: https://hf.co/bigcode/santacoderpack
Commit format Finetuning: https://hf.co/bigcode/santacoder-cf
Line diff format Finetuning: https://hf.co/bigcode/santacoder-ldf

Other datasets
COMMITPACK Metadata: https://hf.co/datasets/bigcode/commitpackmeta

Main artifacts
COMMITPACK: https://hf.co/datasets/bigcode/commitpack
COMMITPACKFT: https://hf.co/datasets/bigcode/commitpackft
HUMANEVALPACK
OCTOGEEX
OCTOCODER | 2308.07124#93 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 94 | # 6.1.3 Aperiodic-Retrieval Reader
In the above strategy, the retrieval systems supply documents to LLMs in a periodic manner. However, retrieving documents at a mandatory frequency may mismatch the retrieval timing and can be costly. Recently, FLARE [176] has addressed this problem by automatically determining the timing of retrieval according to the probability of the generated text. Since the probability can serve as an indicator of LLMs' confidence during text generation [186, 187], a low probability for a generated term could suggest that the LLM requires additional knowledge. Specifically, when the probability of a term falls below a predefined threshold, FLARE employs IR systems to retrieve references in accordance with the ongoing generated sentences, while removing these low-probability terms. FLARE adopts this strategy of prompting LLMs for answer generation solely based on the probabilities of the generated terms, avoiding the need for fine-tuning while still maintaining effectiveness. Besides, Self-RAG [177] addresses this problem by training LLMs such as LLaMA to emit special tokens when they need additional knowledge to support subsequent generation. A critic model is additionally introduced to judge whether the retrieved references are beneficial for generation. A simplified sketch of the confidence-triggered retrieval step follows.
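A simplified sketch of this FLARE-style aperiodic retrieval step, assuming hypothetical `generate_with_probs`, `retrieve`, and `regenerate` helpers, and treating tokens as whitespace-separated words for simplicity:

```python
def flare_step(context: str, generate_with_probs, retrieve, regenerate,
               threshold: float = 0.4) -> str:
    """Generate one sentence; retrieve and regenerate only if confidence is low."""
    sentence, token_probs = generate_with_probs(context)
    if min(token_probs) >= threshold:
        return sentence  # every token is confident enough; no retrieval needed
    # Low confidence: form a query from the confident tokens only,
    # fetch references, and regenerate the sentence grounded in them.
    confident = [tok for tok, p in zip(sentence.split(), token_probs)
                 if p >= threshold]
    references = retrieve(" ".join(confident) or context)
    return regenerate(context, references)
```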
IR systems in a manner akin to human interaction such as issuing queries to seek information. | 2308.07107#94 | Large Language Models for Information Retrieval: A Survey |
2308.07107 | 95 | IR systems in a manner akin to human interaction such as issuing queries to seek information.
To allow LLMs to actively use search engines, Self-Ask [188] and DSP [189] employ few-shot prompts that trigger LLMs to issue search queries whenever they deem it necessary. For example, given the query "When was the existing tallest wooden lattice tower built?", these prompted LLMs can decide to search for "What is the existing tallest wooden lattice tower" to gather the necessary references, as they find that the original query cannot be answered directly. Once they have acquired information about the tower, they can iteratively query IR systems for more details until they decide to generate the final answer instead of asking further questions. Notably, these methods involve IR systems in constructing a single reasoning chain for LLMs. MRC [190] further improves on these methods by prompting LLMs to explore multiple reasoning chains and subsequently combining all generated answers using LLMs. A sketch of such a loop follows.
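A sketch of such a Self-Ask-style active loop under stated assumptions: `llm` and `search` are hypothetical callables, and the `Search:` / `Final answer:` markers are illustrative rather than the exact prompts used in the cited papers.

```python
def self_ask(question: str, llm, search, max_turns: int = 5) -> str:
    """Let the LLM decide between issuing a search query and answering."""
    transcript = f"Question: {question}\n"
    for _ in range(max_turns):
        step = llm(transcript)
        if step.startswith("Final answer:"):
            return step[len("Final answer:"):].strip()
        if step.startswith("Search:"):
            query = step[len("Search:"):].strip()
            transcript += f"{step}\nResult: {search(query)}\n"
        else:
            transcript += step + "\n"
    # Step budget exhausted: force a final answer from what was gathered.
    return llm(transcript + "Final answer:")
```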
# 6.3 Compressor | 2308.07107#95 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 95 | # Table 3: Used and produced artifacts.
C COMMITPACK AND COMMITPACKFT LANGUAGES
Language (↓) | COMMITPACK: MB / Samples / % (MB) | COMMITPACKFT: MB / Samples / % (MB)
Total | 3709175.78 / 57700105 / 100.0 | 1545.02 / 702062 / 100.0
Language column (rows 1-52): json xml text javascript objective-c++ python c c++ markdown java html yaml go csv php jupyter-notebook gettext-catalog sql unity3d-asset typescript owl ruby c# nix shell perl tex css restructuredtext rust groff ini scala coffeescript haskell swift lua svg gas ocaml erlang makefile asciidoc emacs-lisp scss clojure org common-lisp diff groovy html+erb nesc | 2308.07124#95 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 96 | # 6.3 Compressor
Existing LLMs, especially open-source ones such as LLaMA and Flan-T5, have limited input lengths (usually 4,096 or 8,192 tokens). However, the documents or web pages retrieved by upstream IR systems are usually long. Therefore, it is difficult to concatenate all the retrieved documents and feed them into LLMs to generate answers. Although some approaches work around this by aggregating the answers supported by each reference into the final answer, this strategy neglects the potential relations between retrieved passages. A more straightforward way is to directly compress the retrieved documents into short input token sequences or even dense vectors [191-194].
We summarize representative passive reader approaches in Table 7, considering various aspects such as the backbone language models, the insertion point for retrieved references, the timing of using retrieval models, and the tuning strategy employed for LLMs.
# 6.2 Active Reader
However, the passive reader-based approaches separate IR systems from generative language models. This means that LLMs can only passively utilize references provided by IR systems and are unable to interactively engage with the | 2308.07107#96 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 96 | COMMITPACK MB column (rows 1-52, in the order of the language list above): 583293.82 279208.68 270662.6 262824.84 239009.3 234311.56 200876.8 186585.26 171849.95 127103.45 105305.28 100466.64 86444.62 82946.19 74961.64 66854.08 62296.88 56802.76 39535.01 39254.8 36435.46 35830.74 33669.65 33547.92 25109.95 21148.93 17471.11 16306.63 15613.89 15011.3 12020.19 8375.16 8325.96 6795.14 6306.12 5902.72 5763.12 5645.44 5585.38 5355.4 5043.32 4238.51 4138.59 3988.65 3944.94 3523.41 3126.22 2954.9 2586.05 2569.14 2450.68 2439.56 | 2308.07124#96 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 97 | To compress the retrieved references, an intuitive idea is to extract the K most useful sentences from the retrieved documents. LeanContext [191] applies this method and trains a small model by reinforcement learning (RL) to select the top-K sentences most similar to the query. The researchers also augment this strategy with a freely available open-source text reduction method for the remaining sentences as a supplement. Instead of using RL-based methods, RECOMP [192] directly uses the probability or the match ratio of the generated answers against the golden answers as signals to build training datasets and tune the compressor model. For example, the sentence corresponding to the highest generation probability is the positive one while the others are negative ones. Furthermore, FILCO [193] applies a "hindsight" method, directly aligning the prior distribution (the predicted importance distribution of sentences without knowing the gold answer) to the posterior distribution (the same distribution of sentences when the gold answer is known) to tune language models to select sentences. A minimal extractive sketch follows.
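Below is a minimal extractive compressor in the spirit of these methods: it keeps only the K sentences most similar to the query. TF-IDF from scikit-learn is an illustrative stand-in for the trained selectors the cited systems actually use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def compress(query: str, sentences: list, k: int = 5) -> str:
    """Keep only the k sentences most similar to the query."""
    vectorizer = TfidfVectorizer().fit(sentences + [query])
    scores = cosine_similarity(vectorizer.transform([query]),
                               vectorizer.transform(sentences))[0]
    top = sorted(range(len(sentences)), key=lambda i: scores[i],
                 reverse=True)[:k]
    # Re-emit the selected sentences in their original order.
    return " ".join(sentences[i] for i in sorted(top))
```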
However, these extractive methods may lose relevant information scattered across the references. Therefore, abstractive methods have been proposed to summarize the retrieved documents into concise summaries for downstream generation. These methods [192, 194] usually distill the summarizing abilities of LLMs into small models. For example, TCRA [194] leverages GPT-3.5-turbo to build abstractive compression datasets for an mT5 model. A sketch of building such a distillation set follows. | 2308.07107#97 | Large Language Models for Information Retrieval: A Survey |
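A sketch of how such a distillation set could be assembled, with `teacher_summarize` as a hypothetical wrapper around a strong teacher LLM:

```python
def build_distillation_set(examples, teacher_summarize):
    """Distill a teacher LLM's compression ability into training pairs."""
    dataset = []
    for query, documents in examples:
        target = teacher_summarize(query, documents)  # teacher writes the summary
        source = query + "\n" + "\n".join(documents)
        dataset.append({"input": source, "target": target})
    return dataset
```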
2308.07124 | 97 | COMMITPACK Samples column (rows 1-52): 3495038 1923159 1389525 5401937 32227 6189601 2779478 2402294 7645354 3744377 2366841 2592787 1183612 79268 2555419 94000 168327 132772 17867 572136 7458 2928702 923157 221281 1017977 374266 89283 548818 494037 296214 32923 297100 316064 292446 217325 319289 139091 27095 15121 81360 93685 343379 96671 83228 288190 158674 30198 74628 21021 110057 225379 473
COMMITPACK % (MB) column (rows 1-52): 15.73 7.53 7.3 7.09 6.44 6.32 5.42 5.03 4.63 3.43 2.84 2.71 2.33 2.24 2.02 1.8 1.68 1.53 1.07 1.06 0.98 0.97 0.91 0.9 0.68 0.57 0.47 0.44 0.42 0.4 0.32 0.23 0.22 0.18 0.17 0.16 0.16 0.15 0.15 0.14 0.14 0.11 0.11 0.11 0.11 0.09 0.08 0.08 0.07 0.07 0.07 0.07 | 2308.07124#97 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 98 | # 6.4 Analysis
With the rapid development of the above reader approaches, many researchers have begun to analyze the characteristics of retrieval-augmented LLMs:
⢠Liu et al. [195] find that the position of the rele- vant/golden reference has significant influences on the final generation performance. The performance is always better when the relevant reference is at the beginning or the end, which indicates the necessity of introducing a ranking module to order the retrieved knowledge.
⢠Ren et al. [196] observe that by applying retrieval augmentation generation strategy, LLMs can have a better awareness of their knowledge boundaries.
⢠Liu et al. [197] analyze different strategies of integrat- ing retrieval systems and LLMs such as concatenate (i.e., concatenating all references for answer generation) and post fusion (i.e., aggregating the answers corresponding to each reference). They also explore several ways of combining these two strategies.
⢠Aksitov et al. [198] demonstrate that there exists an attribution and fluency tradeoff for retrieval-augmented LLMs: with more received references, the attribution of generated answers increases while the fluency decreases. | 2308.07107#98 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
2308.07124 | 98 | COMMITPACKFT MB column (rows 1-52): 86.74 23.68 66.66 125.01 0.38 132.68 21.08 14.14 131.15 56.28 48.42 190.88 12.13 0.53 60.22 0.1 0.13 3.74 0.16 14.28 0 195.29 26.84 3.84 66.86 4.99 0.56 9.36 15.73 7.24 0.4 21.04 11.18 16.96 3.31 16.27 1.85 0.25 0.34 0.7 1.19 2.53 1.86 1.97 13.21 5.07 0.27 1.45 1.48 4.17 23.1 0.02
COMMITPACKFT Samples column (rows 1-52): 39777 9337 46588 52989 86 56025 8506 4992 62518 20635 20214 114320 5004 375 24791 48 72 2069 101 5868 0 69413 9346 1593 31217 2288 307 5049 6560 2996 192 11360 5040 5513 1389 4849 920 169 193 333 480 960 523 1015 6829 2403 136 778 680 1486 10910 7
| 2308.07124#98 | OctoPack: Instruction Tuning Code Large Language Models |
2308.07107 | 99 | • Mallen et al. [199] argue that always retrieving references to support LLMs in generating answers can hurt question-answering performance. The reason is that LLMs themselves may have adequate knowledge when answering questions about popular entities, and the retrieved noisy passages may interfere with and bias the answering process. To overcome this challenge, they devise a simple strategy that retrieves references only when the popularity of the entities in the query is quite low. By this means, both the efficacy and the efficiency of retrieval-augmented generation improve; a sketch of this adaptive strategy follows.
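A sketch of this adaptive strategy, assuming a hypothetical `popularity` signal (e.g., Wikipedia page views) and stand-in `retrieve`/`generate` helpers:

```python
def adaptive_answer(question: str, entity: str, popularity, retrieve,
                    generate, threshold: int = 10_000) -> str:
    """Retrieve only for questions about low-popularity entities."""
    if popularity(entity) >= threshold:
        # Popular entity: parametric knowledge usually suffices, and noisy
        # retrieved passages could bias the answer.
        return generate(question)
    references = retrieve(question)
    return generate("\n".join(references) + "\n\nQuestion: " + question)
```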
# 6.5 Applications
Recently, researchers [200-205] have applied the retrieval-augmented generation strategy to areas such as clinical QA, medical QA, and financial QA to enhance LLMs with external knowledge and to develop domain-specific applications. For example, ATLANTIC [201] adapts Atlas to the scientific domain to derive a science QA system. Besides, some approaches [206] also apply techniques from federated learning, such as multi-party computation, to perform personalized retrieval-augmented generation with privacy protection. | 2308.07107#99 | Large Language Models for Information Retrieval: A Survey |
2308.07107 | 100 | To better facilitate the deployment of these retrieval-augmented generation systems, some tools or frameworks have been proposed [178, 207, 208]. For example, RETA-LLM [178] breaks the whole complex generation task down into several simple modules in the reader pipeline. These modules include a query rewriting module for refining query intents, a passage extraction module for aligning reference lengths with LLM limitations, and a fact verification module for confirming the absence of fabricated information in the generated answers. A sketch of such a modular pipeline follows.
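A minimal sketch of such a modular reader pipeline, with all components as hypothetical callables rather than the concrete models the toolkit ships with:

```python
def modular_reader(question, rewrite, retrieve, extract, generate, verify):
    """Query rewriting -> retrieval -> passage extraction -> generation -> check."""
    query = rewrite(question)             # refine the query intent
    documents = retrieve(query)
    passages = extract(query, documents)  # fit references to the context window
    answer = generate(passages, question)
    # Fact verification: only release answers supported by the passages.
    return answer if verify(answer, passages) else "No verified answer found."
```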
# 6.6 Limitations
Several IR systems applying the retrieval-augmented generation strategy, such as New Bing and Langchain, have already entered commercial use. However, this novel retrieval-augmented content generation paradigm still faces challenges such as effective query reformulation, optimal retrieval frequency, correct document comprehension, accurate passage extraction, and effective content summarization. It is crucial to address these challenges to effectively realize the potential of LLMs in this paradigm. | 2308.07107#100 | Large Language Models for Information Retrieval: A Survey |
2308.07124 | 100 | dart powershell f#t dm kotlin pascal jsx viml actionscript cython turtle less mathematica xslt scheme perl6 edn ortran java-server-pages standard-ml cmake json5S vala vue reemarker graphql twig tel pod dockerfile yacc postscript racket eagle haxe julia handlebars smarty visual-basic literate-haskell smalltalk isabelle nimrod zig m4 max elixir mako 2395.8 2289.28 2289.24 2223.14 2219.25 2194.68 2124.74 948.21 844.15 736.59 698.95 616.56 475.04 441.46 249.24 223.16 186.94 178.55 173.07 133.48 132.07 1108.2 104.51 1093.8 032.33 004.84 958.96 869.83 859.02 849.73 845.7 800.73 796.64 785.68 772.9 752.07 740.82 720.94 681.52 673.74 665.89 655.82 652.86 621.38 603.58 603.56 558.12 543.01 56873 55381 66840 55584 124266 42511 139148 74062 28819 25927 3882 88634 925 27956 30546 | 2308.07124#100 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
2308.07107 | 101 | 7 SEARCH AGENT
With the development of LLMs, IR systems are also facing new changes. Among them, developing LLMs as intelligent agents has attracted more and more attention. This conceptual shift aims to mimic human browsing patterns, thereby enhancing the capability of these models to handle complex retrieval tasks. Empowered by the advanced natural language understanding and generation capabilities of LLMs, these agents can autonomously search, interpret, and synthesize information from a wide range of sources.
One way to achieve this ability is to design a pipeline that combines a series of modules and assigns different roles to them. Such a pre-defined pipeline mimics users' behaviors on the web by breaking them into several sub-tasks that are performed by different modules. However, this kind of static agent cannot deal with the complex nature of users' behavior sequences on the web and may face challenges when interacting with real-world environments. An alternative solution is to allow LLMs to freely explore the web and interact with it themselves, namely letting the LLM itself decide what action it will take next based on the feedback from the environment (or humans). These agents have more flexibility and act more like human beings; a sketch of such a free-exploration loop follows.
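A sketch of this free-exploration loop, with illustrative action names and hypothetical `decide`/`environment` helpers rather than any specific system's API:

```python
def browse_agent(task: str, decide, environment, max_steps: int = 10) -> str:
    """Let the LLM pick the next browsing action from environment feedback."""
    observation = task
    for _ in range(max_steps):
        action, argument = decide(observation)    # e.g. ("search", "...")
        if action == "answer":
            return argument
        # The environment executes search/click/read and returns new feedback.
        observation = environment(action, argument)
    return "No answer within the step budget."
```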
# 7.1 Static Agent | 2308.07107#101 | Large Language Models for Information Retrieval: A Survey |
To mimic human search patterns, a straightforward approach is to design a static system that browses the web and synthesizes information step by step [209–214]. By breaking the information-seeking process into multiple subtasks, these methods design a pipeline that contains various LLM-based modules in advance and assigns different subtasks to them.
LaMDA [209] serves as an early work on static agents. It consists of a family of Transformer-based neural language models specialized for dialog, with up to 137B parameters, pre-trained on 1.56T tokens of public dialog data and web text.
The study emphasizes the model's development through a static pipeline, encompassing large-scale pre-training followed by strategic fine-tuning stages aimed at enhancing three critical aspects: dialog quality, safety, and groundedness. It can integrate external IR systems for factual grounding, which allows LaMDA to access and use external and authoritative sources when generating responses. SeeKeR [210] also incorporates Internet search into its modular architecture for generating more factual responses. It performs three sequential tasks: generating a search query, generating knowledge from the search results, and generating a final response. GopherCite [213] uses a search engine like Google Search to find relevant sources. It then synthesizes a response that includes verbatim quotes from these sources as evidence, aligning Gopher's output with verified information. WebAgent [212] develops a series of tasks, including instruction decomposition and planning, action programming, and HTML summarization. It can navigate the web, understand and synthesize information from multiple sources, and execute web-based tasks, effectively functioning as an advanced search and interaction agent.
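SeeKeR's three sequential tasks map naturally onto three chained LLM calls. The sketch below is a minimal approximation of that decomposition under the hypothetical `llm`/`search` helpers introduced earlier, not SeeKeR's actual fine-tuned modules.

```python
# Minimal SeeKeR-style decomposition: search query -> knowledge -> response.
# This approximates the three sequential tasks described in the text; the
# real system uses dedicated trained modules rather than raw prompting.

def seeker_style_answer(dialogue: str, llm, search) -> str:
    # Stage 1: generate a search query from the dialog context.
    query = llm(f"Dialogue:\n{dialogue}\nWrite one web search query:")
    # Stage 2: distill a knowledge sentence from the search results.
    docs = search(query)
    knowledge = llm(
        "Extract the single most relevant fact for the dialogue.\n"
        f"Dialogue:\n{dialogue}\nResults:\n" + "\n".join(docs[:5])
    )
    # Stage 3: generate the final response, grounded in that knowledge.
    return llm(
        f"Dialogue:\n{dialogue}\nKnown fact: {knowledge}\n"
        "Write the next response, grounded in the known fact:"
    )
```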
WebGLM [211] designs an LLM-augmented retriever, a bootstrapped generator, and a human preference-aware scorer. These components work together to provide accurate web-enhanced question-answering capabilities that are sensitive to human preferences. Shi et al. [214] focus on enhancing the relevance, responsibility, and trustworthiness of LLMs in web search applications via an intent-aware generator, an evidence-sensitive validator, and a multi-strategy supported optimizer.
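A recurring ingredient of these groundedness-oriented static agents, exemplified by GopherCite's verbatim quoting, is checking that every quoted evidence span actually occurs in a retrieved source. The checker below is a simplified sketch of that idea; the `<<...>>` quote delimiters are an illustrative convention of ours, not the syntax of any cited system.

```python
import re

# Sketch: verify that quoted evidence appears verbatim in retrieved sources.
# Quotes are assumed to be marked as <<...>> in the answer; this delimiter
# convention is illustrative and not taken from the cited systems.

def verify_quotes(answer: str, sources: list[str]) -> list[tuple[str, bool]]:
    """Return each quoted span with a flag: found verbatim in some source?"""
    def normalize(text: str) -> str:
        return re.sub(r"\s+", " ", text).strip().lower()
    corpus = [normalize(s) for s in sources]
    report = []
    for quote in re.findall(r"<<(.+?)>>", answer, flags=re.S):
        needle = normalize(quote)
        report.append((quote, any(needle in doc for doc in corpus)))
    return report

sources = ["The Nile is about 6,650 km long and flows through eleven countries."]
answer = "The Nile crosses many countries; <<the Nile is about 6,650 km long>>."
print(verify_quotes(answer, sources))  # [('the Nile is about 6,650 km long', True)]
```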
# 7.2 Dynamic Agent
Instead of statically arranging LLMs in a pipeline, WebGPT [24] takes an alternative approach by training LLMs to use search engines automatically. This is achieved through a reinforcement learning framework, within which a simulated environment is constructed for GPT-3 models. Specifically, the WebGPT model employs special tokens to execute actions such as querying, scrolling through rankings, and quoting references on search engines. This innovative approach allows the GPT-3 model to use search engines for text generation, enhancing the reliability and real-time capability of the generated texts. A following study [215] has extended this paradigm to the domain of Chinese question answering. Besides, some works develop important benchmarks for interactive web-based agents [216–218]. For example, WebShop [217] aims to provide a scalable, interactive web-based environment for language understanding and decision-making, focusing on the task of online shopping. ASH (Actor-Summarizer-Hierarchical) prompting [219] significantly enhances the ability of LLMs on the WebShop benchmark. It first takes a raw observation from the environment and produces a new, more meaningful representation that aligns with the specific goal. Then, it dynamically predicts the next action based on the summarized observation and the interaction history. A minimal action loop in this style is sketched below.
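The sketch mimics WebGPT's special-token action interface in plain Python. The action vocabulary and the `policy`/`env` objects are simplified stand-ins for the model and its text-based browser environment, not WebGPT's real interface.

```python
# Sketch of a WebGPT-style loop: the model emits one action per step and
# receives the environment's observation back. The action names and the
# policy/env interfaces below are illustrative, not WebGPT's actual API.

ACTIONS = ("SEARCH", "SCROLL_DOWN", "QUOTE", "ANSWER")

def run_episode(policy, env, max_steps: int = 20) -> str:
    """`policy(transcript) -> action string`, e.g. 'SEARCH: nile length'."""
    transcript, quotes = [], []
    for _ in range(max_steps):
        action = policy("\n".join(transcript))
        name, _, arg = action.partition(":")
        name, arg = name.strip(), arg.strip()
        if name == "SEARCH":
            transcript.append(f"RESULTS: {env.search(arg)}")
        elif name == "SCROLL_DOWN":
            transcript.append(f"PAGE: {env.scroll()}")
        elif name == "QUOTE":
            quotes.append(arg)  # collect evidence to cite in the answer
            transcript.append(f"QUOTED: {arg}")
        elif name == "ANSWER":
            return arg + "\n\nReferences:\n" + "\n".join(quotes)
        else:
            transcript.append(f"ERROR: unknown action {name!r}")
    return "No answer produced within the step budget."
```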
# 7.3 Limitations
Though static search agents have been thoroughly studied, the literature on dynamic search agents remains limited. Some agents may lack mechanisms for real-time fact-checking or verification against authoritative sources, leading to the potential dissemination of misinformation. Moreover, since LLMs are trained on data from the Internet, they may inadvertently perpetuate biases present in the training data. This can lead to biased or offensive outputs, and agents may collect unethical content from the web. Finally, as LLMs process user queries, there are concerns regarding user privacy and data security, especially if sensitive or personal information is involved in the queries.
# 8 FUTURE DIRECTIONS

In this survey, we comprehensively reviewed recent advancements in LLM-enhanced IR systems and discussed their limitations. Since the integration of LLMs into IR systems is still in its early stages, there remain many opportunities and challenges. In this section, we summarize the potential future directions in terms of the four modules in an IR system we just discussed, namely the query rewriter, retriever, reranker, and reader. As evaluation has also emerged as an important aspect, we will additionally introduce the corresponding research problems that need to be addressed in the future. Another discussion of important research topics on applying LLMs to IR can be found in a recent perspective paper [53].
# 8.1 Query Rewriter
LLMs have enhanced query rewriting for both ad-hoc and conversational search scenarios. Most of the existing methods rely on prompting LLMs to generate new queries. While yielding remarkable results, the refinement of rewriting quality and the exploration of potential application scenarios require further investigation.
⢠Rewriting queries according to ranking performance. A typical paradigm of prompting-based methods is providing LLMs with several ground-truth rewriting cases (optional) and the task description of query rewriting. Despite LLMs being capable of identifying potential user intents of the query [220], they lack awareness of the resulting retrieval quality of the rewritten query. The absence of this connec- tion can result in rewritten queries that seem correct yet pro- duce unsatisfactory ranking results. Although some existing studies have used reinforcement learning to adjust the query rewriting process according to generation results [100], a substantial realm of research remains unexplored concern- ing the integration of ranking results. | 2308.07107#107 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
• Improving query rewriting in conversational search. As yet, primary efforts have been made to improve query rewriting in ad-hoc search. In contrast, conversational search presents a more developed landscape with a broader scope for LLMs to contribute to query understanding. By incorporating historical interactive information, LLMs can adapt system responses based on user preferences, providing a more effective conversational experience. However, this potential has not been explored in depth. In addition, LLMs could also be used to simulate user behavior in conversational search scenarios, providing more training data, which is urgently needed in current research.
⢠Achieving personalized query rewriting. LLMs offer valu- able contributions to personalized search through their ca- pacity to analyze user-specific data. In terms of query rewrit- ing, with the excellent language comprehension ability of
19
LLMs, it is possible to leverage them to build user profiles based on usersâ search histories (e.g., issued queries, click- through behaviors, and dwell time). This empowers the achievement of personalized query rewriting for enhanced IR and finally benefits personalized search or personalized recommendation.
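A lightweight way to inject ranking feedback, short of full reinforcement learning, is best-of-n selection: sample several candidate rewrites, run each through the retriever, and keep the one with the best offline metric. The sketch below assumes hypothetical `rewrites(query, n)` and `retrieve(query, k)` helpers and relevance judgments `qrels`; the reciprocal-rank scorer is just one of many possible reward choices.

```python
# Sketch: choose among sampled query rewrites by their retrieval quality.
# `rewrites`, `retrieve`, and `qrels` are hypothetical stand-ins; the same
# score could instead serve as a reward signal for reinforcement learning.

def reciprocal_rank(ranking: list[str], relevant: set[str]) -> float:
    for i, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            return 1.0 / i
    return 0.0

def best_rewrite(query: str, qrels: dict[str, set[str]],
                 rewrites, retrieve, n: int = 8, k: int = 10) -> str:
    candidates = [query] + rewrites(query, n)  # keep the original as fallback
    relevant = qrels.get(query, set())
    scored = [(reciprocal_rank(retrieve(c, k), relevant), c) for c in candidates]
    return max(scored)[1]  # rewrite whose ranked list scores highest
```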
# 8.2 Retriever
Leveraging LLMs to improve retrieval models has received considerable attention, promising an enhanced understanding of queries and documents for improved ranking performance. However, despite strides in this field, several challenges and limitations still need to be investigated in the future:
• Reducing the latency of LLM-based retrievers. LLMs, with their massive parameters and world knowledge, often entail high latency during the inference process. This delay poses a significant challenge for practical applications of LLM-based retrievers, as search engines require timely responses. To address this issue, promising research directions include transferring the capabilities of LLMs to smaller models, exploring quantization techniques for LLMs in IR tasks, and so on.
⢠Simulating realistic queries for data augmentation. Since the high latency of LLMs usually blocks their online applica- tion for retrieval tasks, many existing studies have leveraged LLMs to augment training data, which is insensitive to inference latency. Existing methods that leverage LLMs for data augmentation often generate queries without aligning them with real user queries, leading to noise in the training data and limiting the effectiveness of retrievers. As a conse- quence, exploring techniques such as reinforcement learning to enable LLMs to simulate the way that real queries are issued holds the potential for improving retrieval tasks. | 2308.07107#109 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
2308.07107 | 110 | • Incremental indexing for generative retrieval. As elaborated in Section 4.2.2, the emergence of LLMs has paved the way for generative retrievers to generate document identifiers for retrieval tasks. This approach encodes document indexes and knowledge into the LLM parameters. However, the static nature of LLM parameters, coupled with the expensive fine-tuning costs, poses challenges for updating document indexes in generative retrievers when new documents are added. Therefore, it is crucial to explore methods for constructing an incremental index that allows for efficient updates in LLM-based generative retrievers (see the sketch after this list).
⢠Supporting multi-modal search. Web pages usually con- tain multi-modal information, including texts, images, au- dios, and videos. However, existing LLM-enhanced IR sys- tems mainly support retrieval for text-based content. A straightforward solution is to replace the backbone with multi-modal large models, such as GPT-4 [80]. However, this undoubtedly increases the cost of deployment. A promising yet challenging direction is to combine the language un- derstanding capability of LLMs with existing multi-modal retrieval models. By this means, LLMs can contribute their language skills in handling different types of content.
# 8.3 Reranker | 2308.07107#110 | Large Language Models for Information Retrieval: A Survey
2308.07124 | 110 | coldfusion-cfc 208.16 4410 0.0 0.05 20 0.0
xtend 179.54 7715 0.0 0.13 55 0.0
sqf 168.66 7718 0.0 0.09 45 0.0
vhdl 155.95 2185 0.0 0.02 5 0.0
antlr 143.55 3651 0.0 0.03 15 0.0
systemverilog 140.19 3944 0.0 0.08 35 0.0
hcl 136.75 13379 0.0 0.91 421 0.06
asp 136.1 4286 0.0 0.09 22 0.0
nsis 129.12 4048 0.0 0.06 15 0.0
inform-7 120.19 184 0.0 0.01 2 0.0
slim 119.04 18726 0.0 2.06 1052 0.13
groovy-server-pages 117.37 6695 0.0 0.07 25 0.0
ceylon 116.14 7256 0.0 0.1 49 0.0
fish 111.28 15351 0.0 1.33 813 0.09
processing 108.58 5912 0.0 0.07 35 0.0
component-pascal 105.5 43 0.0 0 0 0.0
lasso 104.17 67 0.0 0 0 0.0
glsl | 2308.07124#110 | OctoPack: Instruction Tuning Code Large Language Models
2308.07107 | 111 | # 8.3 Reranker
In Section 5, we have discussed the recent advanced techniques of utilizing LLMs for the reranking task. Some potential future directions in reranking are discussed as follows.
⢠Enhancing the online availability of LLMs. Though effec- tive, many LLMs have a massive number of parameters, making it challenging to deploy them in online applications. Besides, many reranking methods [152, 153] rely on calling LLM APIs, incurring considerable costs. Consequently, de- vising effective approaches (such as distilling to small mod- els) to enhance the online applicability of LLMs emerges as a research direction worth exploring.
⢠Improving personalized search. Many existing LLM-based reranking methods mainly focus on the ad-hoc reranking task. However, by incorporating user-specific information, LLMs can also improve the effectiveness of the personalized reranking task. For example, by analyzing usersâ search his- tory, LLMs can construct accurate user profiles and rerank the search results accordingly, providing personalized re- sults with higher user satisfaction. | 2308.07107#111 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
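The distillation route mentioned in the first bullet can be made concrete with a small sketch: an offline LLM scores each query's candidate documents once, and a compact student reranker is trained to match the teacher's ranking distribution. This is a hedged illustration in PyTorch under assumed shapes (one query per row); the teacher scores and the student model are stand-ins, not taken from any specific published system.

```python
import torch
import torch.nn.functional as F

def listwise_distillation_loss(student_scores, teacher_scores, tau=1.0):
    """KL divergence between the teacher's and the student's ranking
    distributions over one query's candidate documents."""
    p_teacher = F.softmax(teacher_scores / tau, dim=-1)
    log_p_student = F.log_softmax(student_scores / tau, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# One query with four candidates; teacher scores were produced offline
# by an expensive LLM reranker, so no LLM call happens at serving time.
teacher = torch.tensor([[2.1, 0.3, -1.0, 0.8]])
student = torch.tensor([[1.5, 0.1, -0.4, 0.9]], requires_grad=True)
loss = listwise_distillation_loss(student, teacher)
loss.backward()  # gradients flow only through the small student model
```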
2308.07107 | 112 | • Adapting to diverse ranking tasks. In addition to document reranking, there are also other ranking tasks, such as response ranking, evidence ranking, and entity ranking, which also belong to the universal information access system. Navigating LLMs towards adeptness in these diverse ranking tasks can be achieved through specialized methodologies, such as instruction tuning. Exploring this avenue holds promise as an intriguing and valuable research trajectory.
# 8.4 Reader
With the increasing capabilities of LLMs, the future interaction between users and IR systems will be significantly changed. Due to the powerful natural language processing and understanding capabilities of LLMs, the traditional search paradigm of providing ranking results is expected to be progressively replaced by synthesizing conclusive answering passages for user queries using the reader module. Although such strategies have already been investigated by academia and facilitated by industry as we stated in Section 6, there still exists much room for exploration.
⢠Improving the reference quality for LLMs. To support answer generation, existing approaches usually directly feed the retrieved documents to the LLMs as references. How- ever, since a document usually covers many topics, some passages in it may be irrelevant to the user queries and can introduce noise during LLMsâ generation. Therefore, it is necessary to explore techniques for extracting relevant snip- pets from retrieved documents, enhancing the performance of retrieval-augmented generation. | 2308.07107#112 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
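A minimal sketch of the snippet-extraction idea above, assuming a plain lexical-overlap scorer as a stand-in for a real passage-relevance model: documents are windowed into short passages, only the top-scoring passages (rather than whole documents) are placed into the reader's prompt. All names are illustrative.

```python
def split_into_passages(document, size=3):
    """Window a document into short passages of `size` sentences."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return [". ".join(sentences[i:i + size])
            for i in range(0, len(sentences), size)]

def overlap_score(query, passage):
    """Crude lexical relevance: fraction of query terms in the passage."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def select_snippets(query, documents, top_k=3):
    passages = [p for doc in documents for p in split_into_passages(doc)]
    passages.sort(key=lambda p: overlap_score(query, p), reverse=True)
    return passages[:top_k]

snippets = select_snippets(
    "who proposed the transformer architecture",
    ["The Transformer architecture was proposed by Vaswani et al. "
     "It replaced recurrence with attention. Many variants followed."])
prompt = ("Answer using only the references below.\n" +
          "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets)))
```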
2308.07124 | 112 | abap rdoc llvm ada batchfile qml jasmin assembly g-code cucumber html+php kicad api-blueprint eiffel toml modelica bitbake lex stylus protocol-buffer unknown nit factor xs sass pir html+django mediawiki logos genshi coldfusion-cfc xtend sqf vhdl antlr systemverilog hcl asp nsis inform-7 slim groovy-server-pages ceylon fish processing component-pascal lasso glsl saltstack xbase autohotkey liquid purescript agda inno-setup oz chapel arc opencl | 2308.07124#112 | OctoPack: Instruction Tuning Code Large Language Models
2308.07107 | 113 | • Improving the answer reliability of LLMs. Incorporating the retrieved references has significantly alleviated the "hallucination" problem of LLMs. However, it remains uncertain whether the LLMs refer to these supporting materials when answering queries. Some studies [196] have revealed that LLMs can still provide unfaithful answers even with additional references. Therefore, the reliability of the conclusive answers might be lower compared to the ranking results provided by traditional IR systems. It is essential to investigate the influence of these references on the generation process, thereby improving the credibility of reader-based novel IR systems. A simple support-checking sketch follows.
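One hedged way to probe this reliability is a support check: every sentence of the generated answer must be backed by at least one reference passage. The sketch below uses a crude lexical proxy where a production system would use an NLI model or an LLM judge; the functions and threshold are illustrative assumptions.

```python
def is_supported(claim, passage, threshold=0.5):
    """Crude lexical proxy for 'passage supports claim'."""
    c, p = set(claim.lower().split()), set(passage.lower().split())
    return len(c & p) / (len(c) or 1) >= threshold

def unsupported_sentences(answer, references):
    """Return answer sentences not backed by any reference passage."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences
            if not any(is_supported(s, r) for r in references)]
```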
# 8.5 Search Agent
With the outstanding performance of LLMs, the patterns of searching may completely change from traditional IR systems to autonomous search agents. In Section 7, we have discussed many existing works that utilize a static or dynamic pipeline to autonomously browse the web. These works are believed to be the pioneering works of this new search paradigm (a minimal agent loop is sketched below). However, there is still plenty of room for further improvements. | 2308.07107#113 | Large Language Models for Information Retrieval: A Survey
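As a concrete, deliberately simplified picture of such a dynamic pipeline, the sketch below runs an agent loop in which the model repeatedly chooses to search, read a page, or answer. Here `llm`, `search`, and `fetch` are placeholders for a real model and real web tools, not APIs from any cited system.

```python
def run_agent(question, llm, search, fetch, max_steps=5):
    """Dynamic pipeline: the model picks one action per step until it answers."""
    context = []
    for _ in range(max_steps):
        # llm returns e.g. ("search", query), ("read", url), or ("answer", text)
        action, argument = llm(question, context)
        if action == "search":
            context.append(("results", search(argument)))
        elif action == "read":
            context.append(("page", fetch(argument)))
        elif action == "answer":
            return argument
    return "No answer found within the step budget."
```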
2308.07124 | 113 | abap 409.62 1955
rdoc 397.03 38760
llvm 382.2 10727
ada 380.7 13258
batchfile 372.16 43674
qml 361.45 19360
jasmin 359.82 4782
assembly 343.62 8126
g-code 334.96 3690
cucumber 331.38 26677
html+php 323.35 18381
kicad 321.94 759
api-blueprint 317.85 4765
eiffel 311.48 373
toml 292.68 63517
modelica 284.62 2611
bitbake 277.58 43239
lex 275.96 705
stylus 273.06 21967
protocol-buffer 254.12 9202
unknown 252.23 30570
nit 244.54 4951
factor 241.19 15378
xs 239.04 3215
sass 230.65 23144
pir 230.2 6231
html+django 217.04 10535
mediawiki 214.32 10188
logos 212.3 1733
genshi 209.3 956
coldfusion-cfc 208.16 4410
xtend 179.54 7775
sqf 168.66 7778
vhdl 155.95 2185
antlr 143.55 3651
systemverilog 140.19 3944
hcl 136.75 13379
asp 136.1 4286
nsis 129.12 4048
inform-7 120.19 184
slim 119.04 18726
groovy-server-pages 117.37 6695
ceylon 116.14 7256
fish 111.28 15351
processing 108.58 5912
component-pascal 105.5 43
lasso 104.17 67
glsl 99.49 9478
saltstack 98.2 12314
xbase 94.42 1670
autohotkey 94.22 1452
liquid 93.79 2651
purescript 92.41 5024
agda 92.06 4956
inno-setup 91.36 3014
oz 90.48 1551
chapel 89.62 26447
arc 87.21 758
opencl 86.43 2489 | 2308.07124#113 | OctoPack: Instruction Tuning Code Large Language Models
2308.07107 | 114 | • Enhancing the Trustworthiness of LLMs. When LLMs are enabled to browse the web, it is important to ensure the validity of retrieved documents. Otherwise, the unfaithful information may increase the LLMs' "hallucination" problem. Besides, even if the gathered information has high quality, it remains unclear whether it is really used for synthesizing responses. A potential strategy to address this issue is enabling LLMs to autonomously validate the documents they scrape (see the sketch after this list). This self-validation process could incorporate mechanisms for assessing the credibility and accuracy of the information within these documents.
⢠Mitigating Bias and Offensive Content in LLMs. The pres- ence of biases and offensive content within LLM outputs is a pressing concern. This issue primarily stems from biases in- herent in the training data and will be amplified by the low- quality information gathered from the web. Achieving this requires a multi-faceted approach, including improvements in training data, algorithmic adjustments, and continuous monitoring for bias and inappropriate content that LLMs collect and generate.
# 8.6 Evaluation | 2308.07107#114 | Large Language Models for Information Retrieval: A Survey
2308.07107 | 115 | # 8.6 Evaluation
LLMs have attracted significant attention in the field of IR due to their strong ability in context understanding and text generation. To validate the effectiveness of LLM-enhanced IR approaches, it is crucial to develop appropriate evaluation metrics. Given the growing significance of readers as integral components of IR systems, the evaluation should consider two aspects: assessing ranking performance and evaluating generation performance.
⢠Generation-oriented ranking evaluation. Traditional eval- uation metrics for ranking primarily focus on comparing the retrieval results of IR models with ground-truth (rele- vance) labels. Typical metrics include precision, recall, mean reciprocal rank (MRR) [221], mean average precision (MAP), and normalized discounted cumulative gain (nDCG) [222]. These metrics measure the alignment between ranking re- sults and human preference on using these results. Nev- ertheless, these metrics may fall short in capturing a doc- umentâs role in the generation of passages or answers, as their relevance to the query alone might not adequately reflect this aspect. This effect could be leveraged as a means to evaluate the usefulness of documents more comprehen- sively. A formal and rigorous evaluation metric for ranking that centers on generation quality has yet to be defined. | 2308.07107#115 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
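For concreteness, the sketch below computes MRR and nDCG for a single query from graded relevance labels, following the standard textbook definitions rather than any particular evaluation toolkit; the labels are made up for illustration.

```python
import math

def mrr(rels):
    """Reciprocal rank of the first relevant document (rels in ranked order)."""
    for rank, rel in enumerate(rels, start=1):
        if rel > 0:
            return 1.0 / rank
    return 0.0

def dcg(rels):
    return sum((2 ** rel - 1) / math.log2(rank + 1)
               for rank, rel in enumerate(rels, start=1))

def ndcg(rels):
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0

rels = [0, 3, 1, 0, 2]        # graded labels of the five retrieved documents
print(mrr(rels))              # 0.5: first relevant document sits at rank 2
print(round(ndcg(rels), 3))   # ~0.647: compares DCG to the ideal ordering
```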
2308.07124 | 115 | abap 0.01 1
rdoc 0.55 270
llvm 1.6 780
ada 0.73 265
batchfile 2.98 1466
qml 0.94 368
jasmin 0.05 9
assembly 0.17 105
g-code 0.04 7
cucumber 2.59 976
html+php 0.33 150
kicad 0 0
api-blueprint 0.06 23
eiffel 0.01 2
toml 5.58 3424
modelica 0.04 15
bitbake 4.46 1308
lex 0 0
stylus 0.95 480
protocol-buffer 0.52 181
unknown 3.05 1597
nit 0.02 3
factor 0.36 113
xs 0.02 7
sass 1.36 705
pir 0.08 23
html+django 0.85 399
mediawiki 0.08 33
logos 0.04 19
genshi 0.02 3
coldfusion-cfc 0.05 20
xtend 0.13 55
sqf 0.09 45
vhdl 0.02 5
antlr 0.03 15
systemverilog 0.08 35
hcl 0.91 421
asp 0.09 22
nsis 0.06 15
inform-7 0.01 2
slim 2.06 1052
groovy-server-pages 0.07 25
ceylon 0.1 49
fish 1.33 813
processing 0.07 35
component-pascal 0 0
lasso 0 0
glsl 0.34 164
saltstack 1.41 617
xbase 0.01 3
autohotkey 0.02 15
liquid 0.09 30
purescript 0.17 80
agda 0.02 10
inno-setup 0.06 16
oz 0.03 8
chapel 0.04 20
arc 0.01 2
opencl 0.05 23 | 2308.07124#115 | OctoPack: Instruction Tuning Code Large Language Models
2308.07107 | 116 | • Text generation evaluation. The wide application of LLMs in IR has led to a notable enhancement in their generation capability. Consequently, there is an imperative demand for novel evaluation strategies to effectively evaluate the performance of passage or answer generation. Previous evaluation metrics for text generation have several limitations,
including: (1) Dependency on lexical matching: methods such as BLEU [223] or ROUGE [224] primarily evaluate the quality of generated outputs based on n-gram matching (illustrated below). This approach cannot account for lexical diversity and contextual semantics. As a result, models may favor generating common phrases or sentence structures rather than producing creative and novel content. (2) Insensitivity to subtle differences: existing evaluation methods may be insensitive to subtle differences in generated outputs. For example, if a generated output has minor semantic differences from the reference answer but is otherwise similar, traditional methods might overlook these nuanced distinctions. (3) Lack of ability to evaluate factuality: LLMs are prone to the "hallucination" problem [225-228]. The hallucinated texts can closely resemble the oracle texts in terms of vocabulary usage, sentence structures, and patterns, while having non-factual content. Existing methods are hard-pressed to identify such problems, while the incorporation of additional knowledge sources such as knowledge bases or reference texts could potentially aid in addressing this challenge.
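The lexical-matching weakness is easy to demonstrate. Below, a bare-bones unigram-F1 scorer (a ROUGE-1-style approximation written for illustration, not an official implementation) rates a factually wrong but lexically close sentence above a correct paraphrase; the example sentences are invented.

```python
from collections import Counter

def unigram_f1(candidate, reference):
    """Bare-bones ROUGE-1-style F1 over unigram overlap."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

reference = "the model was released in 2019 by the lab"
paraphrase = "the lab published it in 2019"                    # correct, reworded
hallucination = "the model was released in 2024 by the lab"    # wrong year
print(round(unigram_f1(paraphrase, reference), 2))      # 0.53
print(round(unigram_f1(hallucination, reference), 2))   # 0.89 despite the error
```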
# 8.7 Bias | 2308.07107#116 | Large Language Models for Information Retrieval: A Survey