doi: string (length 10–10)
chunk-id: int64 (0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31–31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8–8)
updated: string (length 8–8)
references: list
2308.07107
180
[105] C. Huang, C. Hsu, T. Hsu, C. Li, and Y. Chen, “CONVERSER: few-shot conversational dense retrieval with synthetic data generation,” in Proceedings of the 24th Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL 2023, Prague, Czechia, September 11-15, 2023, D. Schlangen, S. Stoyanchev, S. Joty, O. Dusek, C. Kennington, and M. Alikhani, Eds. Association for Computational Linguistics, 2023, pp. 381–387.
[106] M. Li, H. Zhuang, K. Hui, Z. Qin, J. Lin, R. Jagerman, X. Wang, and M. Bendersky, “Generate, filter, and fuse: Query expansion via multi-step keyword generation for zero-shot neural rankers,” CoRR, vol. abs/2311.09175, 2023.
[107] A. Anand, V. V, V. Setty, and A. Anand, “Context aware query rewriting for text rankers using LLM,” CoRR, vol. abs/2308.16753, 2023.
2308.07107#180
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
180
def mean_absolute_deviation(numbers: List[float]) -> float:
    """ For a given list of input numbers, calculate Mean Absolute Deviation around the mean of this dataset. Mean Absolute Deviation is the average absolute difference between each element and a centerpoint (mean in this case): MAD = average | x - x_mean |
    >>> mean_absolute_deviation([1.0, 2.0, 3.0, 4.0])
    1.0
    """
    mean = sum(numbers) / len(numbers)
    return sum(abs(x - mean) for x in numbers) / mean

Figure 15: Variable misuse bug example. The buggy code (right) incorrectly divides by the mean.

def flip_case(string: str) -> str:
    """ For a given string, flip lowercase characters to uppercase and uppercase to lowercase.
    >>> flip_case('Hello')
    'hELLO'
    """
    return string.swapcase()

def flip_case(string: str) -> str:
    """ For a given string, flip lowercase characters to uppercase and uppercase to lowercase.
    >>> flip_case('Hello')
    'hELLO'
    """
    return string.lower()

Figure 16: Function misuse bug example. The buggy code (right) incorrectly uses the 'lower()' function.
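For reference, a corrected version of the buggy mean_absolute_deviation above. The fix (dividing by len(numbers) instead of the mean) follows directly from the docstring's definition of MAD; this snippet is our illustration, not code from the paper.

from typing import List

def mean_absolute_deviation(numbers: List[float]) -> float:
    # Compute the mean, then average the absolute deviations from it.
    mean = sum(numbers) / len(numbers)
    # Correct denominator: the number of elements, not the mean itself.
    return sum(abs(x - mean) for x in numbers) / len(numbers)

# mean_absolute_deviation([1.0, 2.0, 3.0, 4.0]) == 1.0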
2308.07124#180
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
181
[108] T. Shen, G. Long, X. Geng, C. Tao, T. Zhou, and D. Jiang, “Large language models are strong zero-shot retriever,” CoRR, vol. abs/2304.14233, 2023.
[109] M. Alaofi, L. Gallagher, M. Sanderson, F. Scholer, and P. Thomas, “Can generative LLMs create query variants for test collections? An exploratory study,” in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, H. Chen, W. E. Duh, H. Huang, M. P. Kato, J. Mothe, and B. Poblete, Eds. ACM, 2023, pp. 1869–1873.
2308.07107#181
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
181
Figure 16: Function misuse bug example. The buggy code (right) incorrectly uses the 'lower()' function.

# L PERFORMANCE BREAKDOWN BY HUMANEVALFIX BUG TYPE

All bugs in HUMANEVALFIX are categorized into bug types as described in Appendix K. In Table 15, we break down the HUMANEVALFIX performance of select models from Table 2 by bug type. We find that models struggle most with bugs that require removing excess logic (e.g., Figure 12). WizardCoder is only able to solve 11% of excess logic bugs while solving about four times more bugs that relate to value misuse. The performance of OCTOGEEX and OCTOCODER is more stable than WizardCoder across the different bug types, possibly due to the diversity of COMMITPACKFT as displayed in Figure 2. GPT-4 performs best across all bug types.
2308.07124#181
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
182
[110] W. Yu, D. Iter, S. Wang, Y. Xu, M. Ju, S. Sanyal, C. Zhu, M. Zeng, and M. Jiang, “Generate rather than retrieve: Large language models are strong context generators,” in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[111] T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng, “MS MARCO: A human generated machine reading comprehension dataset,” in CoCo@NIPS, ser. CEUR Workshop Proceedings, vol. 1773. CEUR-WS.org, 2016.
2308.07107#182
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
182
Bug type        Subtype           OCTOGEEX   OCTOCODER   WizardCoder   GPT-4
Missing logic   -                 24.2       24.4        31.2          45.5
Excess logic    -                 16.3       16.9        11.0          38.7
Wrong logic     Value misuse      33.2       34.7        45.1          50.0
Wrong logic     Operator misuse   32.8       42.0        34.4          56.0
Wrong logic     Variable misuse   35.7       33.7        30.4          43.5
Wrong logic     Function misuse   25.0       37.5        37.5          50.0
Overall         -                 28.1       30.4        31.8          47.0

Table 15: Breakdown of HUMANEVALFIX Python pass@1 (%) performance by bug type for select models. Statistics for each bug type are in Table 14.

# M HYPERPARAMETERS

StarCoder finetuning (OCTOCODER): For all experiments finetuning StarCoder, we use a learning rate of 5e-4 with a cosine schedule and linear warmup. We use a batch size of 32 and train for up to one epoch, as we did not observe benefits from more steps. OCTOCODER was trained for 35 steps with a sequence length of 2048 and packing, corresponding to 2.2 million total finetuning tokens.
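The warmup-then-cosine schedule described above can be sketched in a few lines. This is an illustrative reimplementation under stated assumptions, not the authors' training code; warmup_steps and total_steps are placeholders.

import math

def learning_rate(step: int, total_steps: int, warmup_steps: int, peak_lr: float = 5e-4) -> float:
    # Linear warmup from 0 up to the peak learning rate.
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    # Cosine decay from the peak down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))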
2308.07124#182
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
183
[112] T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. P. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, K. Toutanova, L. Jones, M. Kelcey, M. Chang, A. M. Dai, J. Uszkoreit, Q. Le, and S. Petrov, “Natural questions: a benchmark for question answering research,” Trans. Assoc. Comput. Linguistics, vol. 7, pp. 452–466, 2019.
[113] W. Peng, G. Li, Y. Jiang, Z. Wang, D. Ou, X. Zeng, D. Xu, T. Xu, and E. Chen, “Large language model based long-tail query rewriting in Taobao search,” CoRR, vol. abs/2311.03758, 2023.
2308.07107#183
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
184
[114] Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, and J. Tang, “GLM: general language model pretraining with autoregressive blank infilling,” in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, S. Muresan, P. Nakov, and A. Villavicencio, Eds. Association for Computational Linguistics, 2022, pp. 320–335.
2308.07107#184
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
184
SantaCoder pretraining (SANTACODERPACK): We follow the setup from Allal et al. (2023) to pretrain on COMMITPACK, with the exception of using a sequence length of 8192 and using the tokenizer from StarCoder, which has special tokens for the commit format delimiters (see Appendix G). SANTACODERPACK utilizes Multi Query Attention (MQA) (Shazeer, 2019) but removes Fill-in-the-Middle (FIM) (Bavarian et al., 2022). We conducted pretraining on 32 A100 GPUs, totaling 250k training steps, with a global batch size of 64. Other hyperparameter settings follow SantaCoder, including using Adam with β1 = 0.9, β2 = 0.95, ε = 10^-8, and a weight decay of 0.1. The learning rate is set to 2 × 10^-4 and follows a cosine decay after warming up for 2% of the training steps.

# N PROMPTS
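A minimal PyTorch sketch of the optimizer settings quoted above. Whether the weight decay was decoupled (AdamW-style) is our assumption, and the model here is a stand-in for the actual SantaCoder architecture.

import torch

model = torch.nn.Linear(8, 8)  # placeholder module, not the real model
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-4,            # peak learning rate; cosine-decayed after 2% warmup
    betas=(0.9, 0.95),  # β1, β2 as reported
    eps=1e-8,           # ε as reported
    weight_decay=0.1,
)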
2308.07124#184
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
185
[115] A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin, C. Lv, D. Pan, D. Wang, D. Yan, F. Yang, F. Deng, F. Wang, F. Liu, G. Ai, G. Dong, H. Zhao, H. Xu, H. Sun, H. Zhang, H. Liu, J. Ji, J. Xie, J. Dai, K. Fang, L. Su, L. Song, L. Liu, L. Ru, L. Ma, M. Wang, M. Liu, M. Lin, N. Nie, P. Guo, R. Sun, T. Zhang, T. Li, T. Li, W. Cheng, W. Chen, X. Zeng, X. Wang, X. Chen, X. Men, X. Yu, X. Pan, Y. Shen, Y. Wang, Y. Li, Y. Jiang, Y. Gao, Y. Zhang, Z. Zhou, and Z. Wu, “Baichuan 2: Open large-scale language models,” CoRR, vol. abs/2309.10305, 2023.
2308.07107#185
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
185
# N PROMPTS

The prompting format can significantly impact performance. In the spirit of true few-shot learning (Perez et al., 2021), we do not optimize prompts and go with the format provided by the respective model authors or the most intuitive format if none is provided. For each task, we define an instruction, an optional context, and an optional function start (Table 16). The function start is provided to make sure the model directly completes the function without having to search for the function in the model output. These three parts are then combined in slightly different ways for each model (Figures 17-23). We implement our evaluation using open-source frameworks (Ben Allal et al., 2022; Gao et al., 2021).

# HUMANEVALFIX

Instruction: Fix bugs in has_close_elements.

Context:
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = elem - elem2
                if distance < threshold:
                    return True
    return False

Function start:
from typing import List
2308.07124#185
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
186
[116] J. Bai, S. Bai, Y. Chu, Z. Cui, K. Dang, X. Deng, Y. Fan, W. Ge, Y. Han, F. Huang, B. Hui, L. Ji, M. Li, J. Lin, R. Lin, D. Liu, G. Liu, C. Lu, K. Lu, J. Ma, R. Men, X. Ren, X. Ren, C. Tan, S. Tan, J. Tu, P. Wang, S. Wang, W. Wang, S. Wu, B. Xu, J. Xu, A. Yang, H. Yang, J. Yang, S. Yang, Y. Yao, B. Yu, H. Yuan, Z. Yuan, J. Zhang, X. Zhang, Y. Zhang, Z. Zhang, C. Zhou, J. Zhou, X. Zhou, and T. Zhu, “Qwen technical report,” CoRR, vol. abs/2309.16609, 2023.
2308.07107#186
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
186
def has_close_elements(numbers: List[float], threshold: float) -> bool:

# HUMANEVALEXPLAIN

Instruction (Describe): Provide a concise natural language description of the code using at most 213 characters.

Context (Describe):
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
    return False

Instruction (Synthesize): Write functional code in Python according to the description.

Context (Synthesize): {Description generated by the model}

Function start (Synthesize):
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:

# HUMANEVALSYNTHESIZE

Instruction: Write a Python function `has_close_elements(numbers: List[float], threshold: float) -> bool` to solve the following problem: Check if in given list of numbers, are any two numbers closer to each other than given threshold.
>>>
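The two-stage HUMANEVALEXPLAIN protocol above chains two prompts: the model first describes the code, then must re-synthesize it from its own description. A rough sketch of the two prompt builds, where the exact joining per model follows Figures 17-23; the helper names here are ours:

def build_describe_prompt(code: str, char_limit: int = 213) -> str:
    # Stage 1: ask for a description bounded by a character limit.
    instruction = (f"Provide a concise natural language description of the code "
                   f"using at most {char_limit} characters.")
    return instruction + "\n" + code

def build_synthesize_prompt(description: str, function_start: str) -> str:
    # Stage 2: regenerate the function from the model's own description.
    instruction = "Write functional code in Python according to the description."
    return instruction + "\n" + description + "\n" + function_start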
2308.07124#186
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
187
[117] D. Alexander, W. Kusa, and A. P. de Vries, “ORCAS-I: queries annotated with intent using weak supervision,” in SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11-15, 2022, E. Amigó, P. Castells, J. Gonzalo, B. Carterette, J. S. Culpepper, and G. Kazai, Eds. ACM, 2022, pp. 3057–3066.
[118] K. D. Dhole, R. Chandradevan, and E. Agichtein, “An interactive query generation assistant using LLM-based prompt modification and user feedback,” CoRR, vol. abs/2311.11226, 2023.
[119] O. Weller, K. Lo, D. Wadden, D. J. Lawrie, B. V. Durme, A. Cohan, and L. Soldaini, “When do generative query and document expansions fail? A comprehensive study across methods, retrievers, and datasets,” CoRR, vol. abs/2309.08541, 2023.
2308.07107#187
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
187
float) -> bool` to solve the following problem: Check if in given list of numbers, are any two numbers closer to each other than given threshold.
>>> has_close_elements([1.0, 2.0, 3.0], 0.5)
False
>>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
True

Function start:
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
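For completeness, a straightforward correct completion of the function start above; this is the standard solution to this well-known HumanEval problem rather than model output from the paper:

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    # Compare every ordered pair of distinct elements.
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            # abs() is exactly the piece the HUMANEVALFIX variant drops.
            if idx != idx2 and abs(elem - elem2) < threshold:
                return True
    return False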
2308.07124#187
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
188
[120] L. H. Bonifacio, H. Abonizio, M. Fadaee, and R. F. Nogueira, “Inpars: Data augmentation for information retrieval using large language models,” CoRR, vol. abs/2202.05144, 2022.
[121] G. Ma, X. Wu, P. Wang, Z. Lin, and S. Hu, “Pre-training with large language model-based document expansion for dense passage retrieval,” CoRR, vol. abs/2308.08285, 2023.
[122] V. Jeronymo, L. H. Bonifacio, H. Abonizio, M. Fadaee, R. de Alencar Lotufo, J. Zavrel, and R. F. Nogueira, “Inpars-v2: Large language models as efficient dataset generators for information retrieval,” CoRR, vol. abs/2301.01820, 2023.
2308.07107#188
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
188
Table 16: Instructions and function examples used. If no function start or no context is present, that part is not added to the prompt (and the preceding newline is also removed).

Question: {instruction}
{context}

Answer:
{function_start}

Figure 17: OCTOCODER and OCTOGEEX prompting format

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}
{context}

### Response:
{function_start}

Figure 18: WizardCoder prompting format from their codebase.9

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}
{context}

### Response:{function_start}

Figure 19: InstructCodeT5+ decoder prompting format from their codebase.10 The encoder receives the same prompt without the function start.

9 https://github.com/nlpxucan/WizardLM/blob/9c6fb34d012d60dc4f31348ee0a8e35335c04564/WizardCoder/src/humaneval_gen.py#L38
10 https://github.com/salesforce/CodeT5/blob/main/CodeT5%2B/humaneval/generate_codet5p.py#L89
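The conditional assembly rule in Table 16 (omit the function start or the context when absent, together with the newline that would have preceded it) is easy to get subtly wrong. Below is a minimal Python sketch of that logic; the helper name build_octocoder_prompt is ours, and the exact newline placement is our reading of the flattened figure, so treat it as an illustration rather than the paper's implementation.

def build_octocoder_prompt(instruction: str,
                           context: str = "",
                           function_start: str = "") -> str:
    # Assemble an OctoCoder-style prompt per Table 16's rule:
    # absent parts are dropped along with their preceding newline.
    prompt = f"Question: {instruction}"
    if context:
        prompt += f"\n{context}"
    prompt += "\n\nAnswer:"
    if function_start:
        prompt += f"\n{function_start}"
    return prompt

# Example: no context, with a function start.
print(build_octocoder_prompt(
    "Fix bugs in sum_product.",
    function_start="def sum_product(numbers):"))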
2308.07124#188
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07124
189
<|system|>
<|end|>
<|user|>
{instruction}
{context}<|end|>
<|assistant|>
{function_start}

Figure 20: StarChat-β prompting format from their documentation.11

{context}
{instruction}
{function_start}

Figure 21: Default prompting format (StarCoder, BLOOMZ). Used for Figure 3.

{context}
{instruction}

Figure 22: GPT-4 prompting format. Same as Figure 21 but excluding the function start, as we found the model would not continue from the provided function start, likely due to the chat mode.

{context}
{instruction}

Start your code with:
{func_start}

Figure 23: GPT-4 prompting format for HUMANEVALEXPLAIN (Synthesize). We include the function start for the synthesize part of HUMANEVALEXPLAIN in case the function name is not included in the code description generated by GPT-4.

<NME> {filename}
<BEF> {context}
<MSG> {instruction}
<DFF>

Figure 24: Prompting format for Diff Codegen 2B evaluated in Appendix I.

11 https://huggingface.co/HuggingFaceH4/starchat-beta
2308.07124#189
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
190
[125] J. Saad-Falcon, O. Khattab, K. Santhanam, R. Florian, M. Franz, S. Roukos, A. Sil, M. A. Sultan, and C. Potts, “UDAPDR: unsupervised domain adaptation via LLM prompting and distillation of rerankers,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 11265–11279.
[126] Z. Peng, X. Wu, and Y. Fang, “Soft prompt tuning for augmenting dense retrieval with large language models,” 2023.
[127] D. S. Sachan, M. Lewis, D. Yogatama, L. Zettlemoyer, J. Pineau, and M. Zaheer, “Questions are all you need to train a dense passage retriever,” Transactions of the Association for Computational Linguistics, vol. 11, pp. 600–616, 2023.
2308.07107#190
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
190
# O EXAMPLES

O.1 OCTOCODER

Input:

Question: Fix bugs in sum_product.

#include<stdio.h>
#include<vector>
using namespace std;
#include<algorithm>
#include<math.h>
#include<stdlib.h>
vector<int> sum_product(vector<int> numbers){
    int sum=0,product=0;
    for (int i=0;i<numbers.size();i++)
    {
        sum+=numbers[i];
        product*=numbers[i];
    }
    return {sum,product};
}

Answer:

#include<stdio.h>
#include<vector>
using namespace std;
#include<algorithm>
#include<math.h>
#include<stdlib.h>
vector<int> sum_product(vector<int> numbers){

OCTOCODER:

    int sum=0,product=1;
    for (int i=0;i<numbers.size();i++)
    {
        sum+=numbers[i];
        product*=numbers[i];
    }
    return {sum,product};
}

Figure 25: Correct solution from OCTOCODER to a HUMANEVALFIX C++ problem. The corresponding Python problem is depicted in Figure 13.
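The entire fix in Figure 25 is the initializer: a running product seeded with 0 stays 0 forever, whereas 1 is the multiplicative identity. A quick Python illustration of the same point (ours, not part of the original figure):

def sum_product(numbers, product_seed):
    # Fold the list into (sum, product), starting the product from product_seed.
    total, product = 0, product_seed
    for x in numbers:
        total += x
        product *= x
    return total, product

print(sum_product([1, 2, 3, 4], 0))  # (10, 0)  -- buggy seed: product collapses to 0
print(sum_product([1, 2, 3, 4], 1))  # (10, 24) -- correct seed: multiplicative identity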
2308.07124#190
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07124
191
Input:

Question: Write a Java function `public String decodeShift(String s)` to solve the following problem:
takes as input string encoded with encodeShift function. Returns decoded string.

Answer:

import java.util.*;
import java.lang.*;

class Solution {
    /**
    returns encoded string by shifting every character by 5 in the alphabet.
    */
    public String encodeShift(String s) {
        StringBuilder sb = new StringBuilder();
        for (char ch : s.toCharArray()) {
            sb.append((char) ('a' + ((ch + 5 - 'a') % 26)));
        }
        return sb.toString();
    }

    /**
    takes as input string encoded with encodeShift function. Returns decoded string.
    */
    public String decodeShift(String s) {

OCTOCODER:

        StringBuilder sb = new StringBuilder();
        for (char ch : s.toCharArray()) {
            sb.append((char) ('a' + ((ch - 'a' - 5) % 26)));
        }
        return sb.toString();
    }
}
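This is the completion that Figure 26's caption (in the next chunk) flags as incorrect: Java's % takes the sign of the dividend, so (ch - 'a' - 5) % 26 is negative for 'a' through 'e', while adding 21 (congruent to -5 mod 26) keeps the operand non-negative. A small Python sketch of the arithmetic, emulating Java's truncated remainder; this is our illustration, not from the paper.

import math

def java_rem(a: int, b: int) -> int:
    # Java's % truncates toward zero, so the result takes the sign of a.
    return int(math.fmod(a, b))

base = ord('a')
for ch in "abcde":
    buggy = base + java_rem(ord(ch) - base - 5, 26)  # can land below 'a'
    fixed = base + (ord(ch) - base + 21) % 26        # Python's % is already non-negative
    print(ch, chr(buggy), chr(fixed))
# The buggy form decodes 'a'..'e' to characters below 'a';
# the fixed form wraps them correctly to 'v'..'z'.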
2308.07124#191
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
192
[130] A. Neelakantan, T. Xu, R. Puri, A. Radford, J. M. Han, J. Tworek, Q. Yuan, N. Tezak, J. W. Kim, C. Hallacy, J. Heidecke, P. Shyam, B. Power, T. E. Nekoul, G. Sastry, G. Krueger, D. Schnurr, F. P. Such, K. Hsu, M. Thompson, T. Khan, T. Sherbakov, J. Jang, P. Welinder, and L. Weng, “Text and code embeddings by contrastive pre-training,” CoRR, vol. abs/2201.10005, 2022.
[131] X. Ma, L. Wang, N. Yang, F. Wei, and J. Lin, “Fine-tuning llama for multi-stage text retrieval,” CoRR, vol. abs/2310.08319, 2023.
2308.07107#192
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
192
Figure 26: Incorrect solution from OCTOCODER to a HUMANEVALSYNTHESIZE Java problem. The correct solution would be to add 21 to ch instead of subtracting 5 from it.

Input:

Question: Provide a concise natural language description of the code using at most 671 characters.

func FruitDistribution(s string,n int) int {
    split := strings.Split(s, " ")
    for _, i := range split {
        atoi, err := strconv.Atoi(i)
        if err != nil {
            continue
        }
        n = n - atoi
    }
    return n
}

Answer:

OCTOCODER: The function FruitDistribution takes two arguments, a string s and an integer n. The function splits the string s into a slice of strings using the space character as the delimiter. The function iterates over the slice and converts each string to an integer using the strconv.Atoi function. If the conversion is successful, the integer is subtracted from n. The function returns the value of n after all conversions have been performed.
2308.07124#192
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
193
[132] A. Asai, T. Schick, P. S. H. Lewis, X. Chen, G. Izacard, S. Riedel, H. Hajishirzi, and W. Yih, “Task-aware retrieval with instructions,” in Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 3650–3675.
[133] J. Ni, C. Qu, J. Lu, Z. Dai, G. H. Ábrego, J. Ma, V. Y. Zhao, Y. Luan, K. B. Hall, M. Chang, and Y. Yang, “Large dual encoders are generalizable retrievers,” in EMNLP. Association for Computational Linguistics, 2022, pp. 9844–9855.
2308.07107#193
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
193
Input (without previous input):

Question: Write functional code in Go according to the description.
The function FruitDistribution takes two arguments, a string s and an integer n. The function splits the string s into a slice of strings using the space character as the delimiter. The function iterates over the slice and converts each string to an integer using the strconv.Atoi function. If the conversion is successful, the integer is subtracted from n. The function returns the value of n after all conversions have been performed.

Answer:

func FruitDistribution(s string,n int) int {

OCTOCODER:

    fruits := strings.Split(s, " ")
    for _, fruit := range fruits {
        if num, err := strconv.Atoi(fruit); err == nil {
            n -= num
        }
    }
    return n
}

Figure 27: Correct solution from OCTOCODER to a HUMANEVALEXPLAIN Go problem. Given its own description, OCTOCODER writes a correct solution to the problem. Its solution is more compact and efficient than the original solution.
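The round trip works because the description pins down the observable behavior exactly: subtract every whitespace-separated token that parses as an integer from n. A tiny Python transliteration with a sample call (our sketch; the input string is made up for illustration):

def fruit_distribution(s: str, n: int) -> int:
    # Subtract every token of s that parses as an integer from n.
    for token in s.split(" "):
        try:
            n -= int(token)
        except ValueError:
            continue
    return n

print(fruit_distribution("5 apples and 6 oranges", 19))  # 19 - 5 - 6 == 8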
2308.07124#193
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
194
[134] G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave, “Unsupervised dense information retrieval with contrastive learning,” Trans. Mach. Learn. Res., vol. 2022, 2022.
[135] D. Metzler, Y. Tay, D. Bahri, and M. Najork, “Rethinking search: making domain experts out of dilettantes,” SIGIR Forum, vol. 55, no. 1, pp. 13:1–13:27, 2021.
[136] Y. Zhou, J. Yao, Z. Dou, L. Wu, and J. Wen, “Dynamicretriever: A pre-trained model-based IR system without an explicit index,” Mach. Intell. Res., vol. 20, no. 2, pp. 276–288, 2023.
2308.07107#194
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
194
O.2 GPT-4

User:
Write a Go function `func RescaleToUnit(numbers []float64) []float64` to solve the following problem:
Given a list of numbers (of at least two elements), apply a linear transform to that list, such that the smallest number will become 0 and the largest will become 1
>>> RescaleToUnit([1.0, 2.0, 3.0, 4.0, 5.0])
[0.0, 0.25, 0.5, 0.75, 1.0]
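For reference, the transform being asked for is plain min-max normalization, x mapped to (x - min) / (max - min). A minimal Python version matching the doctest above (our sketch, not part of the prompt or of GPT-4's answer):

def rescale_to_unit(numbers):
    lo, hi = min(numbers), max(numbers)
    # The problem guarantees at least two elements; if all values were
    # equal (hi == lo), this would divide by zero -- we assume they are not.
    return [(x - lo) / (hi - lo) for x in numbers]

print(rescale_to_unit([1.0, 2.0, 3.0, 4.0, 5.0]))  # [0.0, 0.25, 0.5, 0.75, 1.0]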
2308.07124#194
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07124
195
GPT-4: Here is a Go function that solves the problem:

package main

import (
    "fmt"
    "math"
)

func RescaleToUnit(numbers []float64) []float64 {
    min := numbers[0]
    max := numbers[0]

    // Find the min and max values in the list
    for _, number := range numbers {
        if number < min {
            min = number
        }
        if number > max {
            max = number
        }
    }

    // Apply the linear transform to each number in the list
    for i, number := range numbers {
        numbers[i] = (number - min) / (max - min)
    }

    return numbers
}

func main() {
    fmt.Println(RescaleToUnit([]float64{1.0, 2.0, 3.0, 4.0, 5.0}))
}

This function first finds the minimum and maximum values in the list of numbers. Then it applies the linear transformation to each number in the list. The transformation is `(number - min) / (max - min)`, which scales the number such that `min` becomes 0 and `max` becomes 1. The transformed numbers are returned in a new list.
2308.07124#195
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
196
[138] Y. Tay, V. Tran, M. Dehghani, J. Ni, D. Bahri, H. Mehta, Z. Qin, K. Hui, Z. Zhao, J. P. Gupta, T. Schuster, W. W. Cohen, and D. Metzler, “Transformer memory as a differentiable search index,” in NeurIPS, 2022.
[139] N. Ziems, W. Yu, Z. Zhang, and M. Jiang, “Large language models are built-in autoregressive search engines,” in Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 2666–2678.
[140] R. F. Nogueira, W. Yang, K. Cho, and J. Lin, “Multi-stage document ranking with BERT,” CoRR, vol. abs/1910.14424, 2019.
2308.07107#196
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
196
Figure 28: Incorrect solution from GPT-4 to a HUMANEVALSYNTHESIZE Go problem. GPT-4 includes an import that it does not use (math) which leads to a compilation error in Golang and thus the function does not pass the test cases.

User:
Provide a concise natural language description of the code using at most 246 characters.

def truncate_number(number: float) -> float:
    return number % 1.0

GPT-4: The code defines a function that takes a float as input and returns its fractional part by using the modulus operator with 1.0.

User (without previous conversation history):
The code defines a function that takes a float as input and returns its fractional part by using the modulus operator with 1.0.
Write functional code in Python according to the description. Start your code with:
def truncate_number(number: float) -> float:

GPT-4:
def truncate_number(number: float) -> float:
    return number % 1.0
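One nuance the 246-character description glosses over: in Python, % takes the sign of the divisor, so number % 1.0 is always non-negative, even for negative inputs. A quick check (our illustration, not part of the original figure):

def truncate_number(number: float) -> float:
    return number % 1.0

print(truncate_number(3.5))   # 0.5
print(truncate_number(-1.5))  # 0.5, not -0.5: Python's % follows the divisor's sign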
2308.07124#196
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
197
[141] J. Ju, J. Yang, and C. Wang, “Text-to-text multi-view learning for passage re-ranking,” in SIGIR. ACM, 2021, pp. 1803–1807. [142] R. Pradeep, R. F. Nogueira, and J. Lin, “The expando-mono-duo design pattern for text ranking with pre-trained sequence-to-sequence models,” CoRR, vol. abs/2101.05667, 2021. [143] H. Zhuang, Z. Qin, R. Jagerman, K. Hui, J. Ma, J. Lu, J. Ni, X. Wang, and M. Bendersky, “RankT5: Fine-tuning T5 for text ranking with ranking losses,” in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, H. Chen, W. E. Duh, H. Huang, M. P. Kato, J. Mothe, and B. Poblete, Eds. ACM, 2023, pp. 2308–2313.
2308.07107#197
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
197
def truncate_number(number: float) -> float: GPT-4: def truncate_number(number: float) -> float: return number % 1.0 Figure 29: Correct solution from GPT-4 to a HUMANEVALEXPLAIN Python problem. GPT-4 writes a description that is significantly shorter than the docstring and thus within the character limit. It then generates the perfect solution from its own description. The docstring for this problem can be seen in Figure 12. User: fn has_close_elements(numbers:Vec<f32>, threshold: f32) -> bool{ for i in 0..numbers.len(){ for j in 1..numbers.len(){ if i != j { let distance:f32 = numbers[i] - numbers[j]; if distance < threshold{ return true; } } } } return false; }
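A quick executable gloss on the truncate_number solution quoted above: in Python, taking a float modulo 1.0 returns its fractional part, which is exactly what the benchmark asks for. The following minimal sketch is written for illustration (the asserts are hand-picked, and the negative-input caveat is my addition, not part of the benchmark):

def truncate_number(number: float) -> float:
    # number % 1.0 keeps only the fractional part: 3.5 % 1.0 == 0.5.
    # Caveat: Python's % returns a non-negative remainder, so
    # -0.5 % 1.0 == 0.5; the idiom matches "truncation" only for
    # non-negative inputs, which is what the HumanEval docstring assumes.
    return number % 1.0

assert truncate_number(3.5) == 0.5
assert truncate_number(4.0) == 0.0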
2308.07124#197
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
198
[144] L. Zhang, Y. Zhang, D. Long, P. Xie, M. Zhang, and M. Zhang, “RankingGPT: Empowering large language models in text ranking with progressive enhancement,” CoRR, vol. abs/2311.16720, 2023. [145] X. Zhang, S. Hofstätter, P. Lewis, R. Tang, and J. Lin, “Rank-without-GPT: Building GPT-independent listwise rerankers on open-source large language models,” arXiv preprint arXiv:2312.02969, 2023.
2308.07107#198
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
198
return false; } def check(has_close_elements): assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False assert has_close_elements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False check(has_close_elements) Fix bugs in has_close_elements. check(has_close_elements)
2308.07124#198
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
199
[146] P. Liang, R. Bommasani, T. Lee, D. Tsipras, D. Soylu, M. Yasunaga, Y. Zhang, D. Narayanan, Y. Wu, A. Kumar, B. Newman, B. Yuan, B. Yan, C. Zhang, C. Cosgrove, C. D. Manning, C. Ré, D. Acosta-Navas, D. A. Hudson, E. Zelikman, E. Durmus, F. Ladhak, F. Rong, H. Ren, H. Yao, J. Wang, K. Santhanam, L. J. Orr, L. Zheng, M. Yüksekgönül, M. Suzgun, N. Kim, N. Guha, N. S. Chatterji, O. Khattab, P. Henderson, Q. Huang, R. Chi, S. M. Xie, S. Santurkar, S. Ganguli, T. Hashimoto, T. Icard, T. Zhang, V. Chaudhary, W. Wang, X. Li, Y. Mai, Y. Zhang, and Y. Koreeda, “Holistic evaluation of language models,” CoRR, vol. abs/2211.09110, 2022.
2308.07107#199
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
199
check(has_close_elements) Fix bugs in has_close_elements. GPT-4: The function has_close_elements is supposed to check if there are any two elements in the vector that are closer than a given threshold. However, there are a few issues with the current implementation: 1. The distance calculation is not correct. It should be the absolute difference between the two numbers, not just the difference. 2. The second loop should start from i+1, not 1. This is because we only need to compare each number with the numbers after it, not the numbers before it. Here is the corrected function: fn has_close_elements(numbers:Vec<f32>, threshold: f32) -> bool{ for i in 0..numbers.len(){ for j in i+1..numbers.len(){ let distance:f32 = (numbers[i] - numbers[j]).abs(); if distance < threshold{ return true; } } } return false; } This function will now correctly check if there are any two elements in the vector that are closer than the given threshold.
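For readers who want to run the fixed logic outside Rust, here is a minimal Python re-implementation; it is a sketch written for illustration (not output quoted from the paper) and mirrors the two repairs described above: the absolute difference and the inner loop starting at i + 1.

def has_close_elements(numbers, threshold):
    # Compare each pair exactly once; the inner loop starts at i + 1,
    # and abs() makes the comparison independent of element order.
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):
            if abs(numbers[i] - numbers[j]) < threshold:
                return True
    return False

# Two of the unit tests from the quoted prompt:
assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) is True
assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) is False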
2308.07124#199
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
200
[147] H. Zhuang, Z. Qin, K. Hui, J. Wu, L. Yan, X. Wang, and M. Bendersky, “Beyond yes and no: Improving zero-shot LLM rankers via scoring fine-grained relevance labels,” CoRR, vol. abs/2310.14122, 2023. [148] D. S. Sachan, M. Lewis, M. Joshi, A. Aghajanyan, W. Yih, J. Pineau, and L. Zettlemoyer, “Improving passage retrieval with zero-shot question generation,” in EMNLP. Association for Computational Linguistics, 2022, pp. 3781–3797.
2308.07107#200
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
200
} This function will now correctly check if there are any two elements in the vector that are closer than the given threshold. Figure 30: Correct solution from GPT-4 to a HUMANEVALFIX Rust problem. GPT-4 correctly fixes the bug by adding the necessary ’abs’ statement to compute the absolute value. It additionally improves the efficiency of the function by starting the inner loop from ’i+1’ instead of 1 like in the original solution. As some problems in HUMANEVALPACK are not perfectly efficient, evaluating the ability of models to improve the efficiency of the solutions is another interesting benchmarking direction (Yetistiren et al., 2022). User: def car_race_collision(n: int): return n**3 def check(car_race_collision): assert car_race_collision(2) == 4 assert car_race_collision(3) == 9 assert car_race_collision(4) == 16 assert car_race_collision(8) == 64 assert car_race_collision(10) == 100 check(car_race_collision) Fix bugs in car_race_collision. GPT-4:
2308.07124#200
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
201
[149] S. Zhuang, B. Liu, B. Koopman, and G. Zuccon, “Open-source large language models are strong zero-shot query likelihood models for document ranking,” in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 8807–8817. [150] S. Cho, S. Jeong, J. Seo, and J. C. Park, “Discrete prompt optimization via constrained generation for zero-shot re-ranker,” in ACL (Findings). Association for Computational Linguistics, 2023, pp. 960–971. [151] A. Drozdov, H. Zhuang, Z. Dai, Z. Qin, R. Rahimi, X. Wang, D. Alon, M. Iyyer, A. McCallum, D. Metzler, and K. Hui, “PaRaDe: Passage ranking using demonstrations with LLMs,” in Findings of the Association for Computational
2308.07107#201
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
201
check(car_race_collision) Fix bugs in car_race_collision. GPT-4: The function car_race_collision seems to be working fine as it is. It takes an integer ’n’ as input and returns the cube of ’n’. There don’t seem to be any bugs in this function. However, if the function is supposed to calculate something specific related to a car race collision and it’s not doing that correctly, we would need more information about the expected behavior to fix it. Figure 31: Incorrect solution from GPT-4 to a HUMANEVALFIX Python problem. GPT-4 does not catch that the test cases expect the function to square, not cube n. OCTOCODER correctly modifies the 3 to 2 in 6 out of n=20 generations. User:
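The repair GPT-4 missed here is a one-character change: the quoted unit tests pair each input n with n squared (2 -> 4, 3 -> 9, 10 -> 100), so the exponent must be 2. A minimal sketch of the intended fix (my reconstruction, not the paper's reference output):

def car_race_collision(n: int) -> int:
    # The tests map 2 -> 4, 3 -> 9, 8 -> 64, 10 -> 100: n squared, not cubed.
    return n**2

assert car_race_collision(3) == 9
assert car_race_collision(8) == 64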
2308.07124#201
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07124
202
def rounded_avg(n, m): if m < n: return -1 summation = 0 for i in range(n, m+1): summation += i return bin(round(summation/(m - n))) def check(rounded_avg): # Check some simple cases assert rounded_avg(1, 5) == "0b11" assert rounded_avg(7, 13) == "0b1010" assert rounded_avg(964,977) == "0b1111001010" assert rounded_avg(996,997) == "0b1111100100" assert rounded_avg(560,851) == "0b1011000010" assert rounded_avg(185,546) == "0b101101110" assert rounded_avg(362,496) == "0b110101101" assert rounded_avg(350,902) == "0b1001110010" assert rounded_avg(197,233) == "0b11010111" # Check some edge cases that are easy to work out by hand. assert rounded_avg(7, 5) == -1 assert rounded_avg(5, 1) == -1 assert rounded_avg(5, 5) == "0b101" check(rounded_avg) Fix bugs in
2308.07124#202
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
203
for Computational Linguistics, Dec. 2023, pp. 14242–14252. [152] W. Sun, L. Yan, X. Ma, S. Wang, P. Ren, Z. Chen, D. Yin, and Z. Ren, “Is chatgpt good at search? investigating large language models as re-ranking agents,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 14918–14937. [153] X. Ma, X. Zhang, R. Pradeep, and J. Lin, “Zero-shot listwise document reranking with a large language model,” CoRR, vol. abs/2305.02156, 2023. [154] R. Tang, X. Zhang, X. Ma, J. Lin, and F. Ture, “Found in the middle: Permutation self-consistency improves listwise ranking in large language models,” CoRR, vol. abs/2310.07712, 2023.
2308.07107#203
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
204
[155] Z. Qin, R. Jagerman, K. Hui, H. Zhuang, J. Wu, J. Shen, T. Liu, J. Liu, D. Metzler, X. Wang et al., “Large language models are effective text rankers with pairwise ranking prompting,” arXiv preprint arXiv:2306.17563, 2023. [156] S. Zhuang, H. Zhuang, B. Koopman, and G. Zuccon, “A setwise approach for effective and highly efficient zero-shot ranking with large language models,” CoRR, vol. abs/2310.09497, 2023.
2308.07107#204
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
204
def check(rounded_avg): check(rounded_avg) GPT-4: The function rounded_avg is supposed to calculate the average of all numbers between n and m (inclusive), round it, and then return the binary representation of the result. There are several bugs in the function: 1. The function should return -1 if m < n, but it should also return -1 if m == n because in this case, the denominator of the average calculation would be 0, which would cause a division by zero error. 2. The function calculates the average by summing all numbers between n and m and then dividing by (m - n). However, this is incorrect because the number of integers between n and m (inclusive) is (m - n + 1), not (m - n). 3. The function uses the bin function to convert the result to binary. However, the bin function returns a string that starts with ’0b’ to indicate that it is a binary number. If the function is supposed to return the binary representation as an integer, it should remove this prefix. Here is the corrected function: def rounded_avg(n, m):
2308.07124#204
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
205
[157] F. Ferraretto, T. Laitz, R. de Alencar Lotufo, and R. F. Nogueira, “Exaranker: Synthetic explanations improve neural rankers,” in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, H. Chen, W. E. Duh, H. Huang, M. P. Kato, J. Mothe, and B. Poblete, Eds. ACM, 2023, pp. 2409–2414. [158] L. Boytsov, P. Patel, V. Sourabh, R. Nisar, S. Kundu, R. Ramanathan, and E. Nyberg, “Inpars-light: Cost-effective unsupervised training of efficient rankers,” CoRR, vol. abs/2301.02998, 2023.
2308.07107#205
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
205
def rounded_avg(n, m): if m <= n: return -1 summation = 0 for i in range(n, m+1): summation += i average = round(summation / (m - n + 1)) return int(bin(average)[2:]) This function now correctly calculates the average, rounds it, and returns the binary representation as an integer. If n is greater than or equal to m, it returns -1. Figure 32: Incorrect solution from GPT-4 to a HUMANEVALFIX Python problem. The bug consists of a "+1" missing after "m-n". GPT-4 correctly fixes that; however, it introduces two new bugs: returning "-1" when m=n and removing the "0b", both of which evidently lead to the provided unit tests failing. O.3 WIZARDCODER Input: Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Fix bugs in bf.
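Returning to the rounded_avg example above: a repair that does satisfy the quoted unit tests keeps the original m < n guard and the "0b"-prefixed string that bin() produces, and only adds the missing "+1" to the divisor. A minimal sketch (my reconstruction, not the paper's reference solution):

def rounded_avg(n, m):
    if m < n:
        return -1
    summation = sum(range(n, m + 1))
    # The inclusive range n..m has (m - n + 1) elements, not (m - n).
    return bin(round(summation / (m - n + 1)))

assert rounded_avg(1, 5) == "0b11"
assert rounded_avg(5, 5) == "0b101"
assert rounded_avg(7, 5) == -1

Note that Python's round() ties to even; for the half-way cases in the fuller quoted test suite (e.g. 970.5, 996.5, 705.5) this banker's rounding agrees with the expected outputs.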
2308.07124#205
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
206
[159] A. Askari, M. Aliannejadi, E. Kanoulas, and S. Verberne, "Generating synthetic documents for cross-encoder re-rankers: A comparative study of chatgpt and human experts," CoRR, vol. abs/2305.02320, 2023.
[160] R. Pradeep, S. Sharifymoghaddam, and J. Lin, "Rankvicuna: Zero-shot listwise document reranking with open-source large language models," CoRR, vol. abs/2309.15088, 2023.
[161] ——, "Rankzephyr: Effective and robust zero-shot listwise reranking is a breeze!" CoRR, vol. abs/2312.02724, 2023.
2308.07107#206
2308.07124
206
def bf(planet1, planet2):
    planet_names = ("Mercury", "Venus", "Earth", "Mars", "Jupyter", "Saturn", "Uranus", "Neptune")
    if planet1 not in planet_names or planet2 not in planet_names or planet1 == planet2:
        return ()
    planet1_index = planet_names.index(planet1)
    planet2_index = planet_names.index(planet2)
    if planet1_index < planet2_index:
        return (planet_names[planet1_index + 1: planet2_index])
    else:
        return (planet_names[planet2_index + 1: planet1_index])

def check(bf):
    # Check some simple cases
    assert bf("Jupiter", "Neptune") == ("Saturn", "Uranus"), "First test error: " + str(len(bf("Jupiter", "Neptune")))
    assert bf("Earth", "Mercury") == ("Venus",), "Second test error: " + str(bf("Earth", "Mercury"))
    assert bf("Mercury", "Uranus") == ("Venus", "Earth", "Mars", "Jupiter", "Saturn"), "Third test error: " + str(bf("Mercury", "Uranus"))
    assert bf("Neptune", "Venus") == ("Earth",
2308.07124#206
2308.07107
207
[162] W. Sun, Z. Chen, X. Ma, L. Yan, S. Wang, P. Ren, Z. Chen, D. Yin, and Z. Ren, "Instruction distillation makes large language models efficient zero-shot rankers," arXiv preprint arXiv:2311.01555, 2023.
[163] C. J. C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. N. Hullender, "Learning to rank using gradient descent," in ICML, ser. ACM International Conference Proceeding Series, vol. 119. ACM, 2005, pp. 89–96.
[164] J. A. Baktash and M. Dawodi, "Gpt-4: A review on advancements and opportunities in natural language processing," arXiv preprint arXiv:2305.03195, 2023.
[165] H. Wachsmuth, S. Syed, and B. Stein, "Retrieval of the best counterargument without prior topic knowledge," in ACL (1). Association for Computational Linguistics, 2018, pp. 241–251.
2308.07107#207
2308.07124
207
"Saturn"), "Third test error: " + str(bf("Mercury", "Uranus")) assert bf("Neptune", "Venus") == ("Earth", "Mars", "Jupiter", "Saturn", "Uranus"), "Fourth test error: " + str(bf("Neptune", "Venus")) # Check some edge cases that are easy to work out by hand. assert bf("Earth", "Earth") == () assert bf("Mars", "Earth") == () assert bf("Jupiter", "Makemake") == () check(bf) ### Response: def bf(planet1, planet2):
2308.07124#207
2308.07107
208
[166] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang, "Retrieval augmented language model pre-training," in Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, ser. Proceedings of Machine Learning Research, vol. 119. PMLR, 2020, pp. 3929–3938.
[167] P. S. H. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W. Yih, T. Rocktäschel, S. Riedel, and D. Kiela, "Retrieval-augmented generation for knowledge-intensive NLP tasks," in Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., 2020.
2308.07107#208
2308.07124
208
# WizardCoder:

    planet_names = ("Mercury", "Venus", "Earth", "Mars", "Jupyter", "Saturn", "Uranus", "Neptune")
    if planet1 not in planet_names or planet2 not in planet_names or planet1 == planet2:
        return ()
    planet1_index = planet_names.index(planet1)
    planet2_index = planet_names.index(planet2)
    if planet1_index < planet2_index:
        return (planet_names[planet1_index + 1: planet2_index])
    else:
        return (planet_names[planet2_index + 1: planet1_index])

Figure 33: Incorrect solution from WizardCoder to a HUMANEVALFIX Python problem. The bug consists of "Jupiter" being misspelled as "Jupyter" in the function. WizardCoder returns the same function without making a single change, leaving the bug. This is a particularly difficult bug that only GPT-4 and OCTOCODER (only for 1 out of the n=20 generations) are able to solve.
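For reference, a hedged sketch of the repair that Figure 33 calls for; the only change needed is the spelling fix, and the result can be checked against the unit tests shown in the preceding chunks:

def bf(planet1, planet2):
    # Single repair: "Jupyter" is corrected to "Jupiter"; the rest of
    # the logic already satisfies the tests in check(bf).
    planet_names = ("Mercury", "Venus", "Earth", "Mars", "Jupiter",
                    "Saturn", "Uranus", "Neptune")
    if planet1 not in planet_names or planet2 not in planet_names or planet1 == planet2:
        return ()
    planet1_index = planet_names.index(planet1)
    planet2_index = planet_names.index(planet2)
    if planet1_index < planet2_index:
        return planet_names[planet1_index + 1:planet2_index]
    else:
        return planet_names[planet2_index + 1:planet1_index]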
2308.07124#208
2308.07107
209
[168] W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W. Yih, "REPLUG: retrieval-augmented black-box language models," CoRR, vol. abs/2301.12652, 2023.
[169] G. Izacard, P. S. H. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. Dwivedi-Yu, A. Joulin, S. Riedel, and E. Grave, "Atlas: Few-shot learning with retrieval augmented language models," J. Mach. Learn. Res., vol. 24, pp. 251:1–251:43, 2023.
[170] A. Lazaridou, E. Gribovskaya, W. Stokowiec, and N. Grigorev, "Internet-augmented language models through few-shot prompting for open-domain question answering," CoRR, vol. abs/2203.05115, 2022.
2308.07107#209
2308.07124
209
# O.4 BLOOMZ

# Input:

const generateIntegers = (a, b) => {
  if (a > b) {
    let tmp = a;
    a = b;
    b = tmp;
  }
  let y = []
  for (let i = a; i <= b; i++) {
    if (i == 2 || i == 4 || i == 6 || i == 8) {
      y.push(i)
    }
  }
  return y
}

Provide a concise natural language description of the code using at most 218 characters.

# BLOOMZ: Generate a list of integers from a to b.

# Input (without previous input): Generate a list of integers from a to b. Write functional code in JavaScript according to the description.

const generateIntegers = (a, b) => {

# BLOOMZ:

  let integers = [];
  for (let i = a; i <= b; i++) {
    integers.push(i);
  }
  return integers;
}
2308.07124#209
2308.07107
210
[171] H. He, H. Zhang, and D. Roth, "Rethinking with retrieval: Faithful large language model inference," CoRR, vol. abs/2301.00303, 2023.
[172] W. Yu, H. Zhang, X. Pan, K. Ma, H. Wang, and D. Yu, "Chain-of-note: Enhancing robustness in retrieval-augmented language models," CoRR, vol. abs/2311.09210, 2023.
[173] O. Ram, Y. Levine, I. Dalmedigos, D. Muhlgay, A. Shashua, K. Leyton-Brown, and Y. Shoham, "In-context retrieval-augmented language models," CoRR, vol. abs/2302.00083, 2023.
2308.07107#210
2308.07124
210
# BLOOMZ:

  let integers = [];
  for (let i = a; i <= b; i++) {
    integers.push(i);
  }
  return integers;
}

Figure 34: Incorrect solution from BLOOMZ to a HUMANEVALEXPLAIN JavaScript problem. BLOOMZ misses in its description that the function generates only the even digits between a and b, in ascending order. Thus, its synthesized function based on its own description simply collects all integers between a and b. BLOOMZ likely left out this part due to its bias for short and concise answers. However, this bias backfires for this task.

O.5 STARCODER
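To make Figure 34's point concrete, here is an illustrative Python port (function name is my own) of what a faithful description should yield: only the even digits between a and b, in ascending order.

def generate_integers(a, b):
    # Mirror the JavaScript reference: swap the bounds if needed, then
    # keep only the even digits 2, 4, 6, 8 that fall within the range.
    lo, hi = min(a, b), max(a, b)
    return [i for i in range(lo, hi + 1) if i in (2, 4, 6, 8)]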
2308.07124#210
2308.07107
211
[174] Z. Shao, Y. Gong, Y. Shen, M. Huang, N. Duan, and W. Chen, "Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy," in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 9248–9274.
[175] H. Trivedi, N. Balasubramanian, T. Khot, and A. Sabharwal, "Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions," in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 10 014–10 037.
[176] Z. Jiang, F. F. Xu, L. Gao, Z. Sun, Q. Liu, J. Dwivedi-
2308.07107#211
2308.07124
211
O.5 STARCODER

# Input:

import java.util.*;
import java.lang.*;

class Solution {
    public Map<String, Integer> histogram(String test) {
        Map<String, Integer> dict1 = new HashMap<>();
        List<String> list1 = Arrays.asList(test.split(" "));
        int t = 0;
        for (String i : list1) {
            if (Collections.frequency(list1, i) > t && !i.isEmpty()) {
                t = Collections.frequency(list1, i);
            }
        }
        if (t > 0) {
            for (String i : list1) {
                if (Collections.frequency(list1, i) == t) {
                    dict1.put(i, t);
                }
            }
        }
        return dict1;
    }
}

Provide a concise natural language description of the code using at most 423 characters.

StarCoder:

public class Solution { public Map<String, Integer> histogram(String test) { int t = 0; } } if (t > 0) {

Write functional code in Java according to the description.

import java.util.*;
import java.lang.*;

class Solution {
    public Map<String, Integer> histogram(String test) {

StarCoder:
2308.07124#211
2308.07107
212
[176] Z. Jiang, F. F. Xu, L. Gao, Z. Sun, Q. Liu, J. Dwivedi-Yu, Y. Yang, J. Callan, and G. Neubig, "Active retrieval augmented generation," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 7969–7992.
[177] A. Asai, Z. Wu, Y. Wang, A. Sil, and H. Hajishirzi, "Self-rag: Learning to retrieve, generate, and critique through self-reflection," CoRR, vol. abs/2310.11511, 2023.
2308.07107#212
2308.07124
212
import java.util.*;
import java.lang.*;

class Solution {
    public Map<String, Integer> histogram(String test) {

StarCoder:

        Map<String, Integer> map = new HashMap<String, Integer>();
        int t = 0;
        for (int i = 0; i < test.length(); i++) {
            if (test.charAt(i) == 't') {
                t++;
            }
        }
        map.put("t", t);
        return map;
    }
}

Figure 35: Incorrect solution from StarCoder to a HUMANEVALEXPLAIN Java problem. StarCoder is not capable of generating natural language descriptions and instead copies the solution. However, our post-processing removes substrings from the solution (§3), leading to a useless description. Given this description, it generates an irrelevant function. As StarCoder cannot generate natural language explanations, we disqualify it from HUMANEVALEXPLAIN, setting its score to 0 in Table 2.

INSTRUCTCODET5+

# Input: Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: Fix bugs in has_close_elements.

from typing import List
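As a contrast to Figure 35's copied output, a loose Python paraphrase (naming is my own) of what the input Java code actually does: map each most-frequent, non-empty space-separated word to its count.

def histogram(test):
    # Split on single spaces, ignore empty tokens when finding the top
    # frequency, then return every word that attains that frequency.
    words = test.split(" ")
    counts = {w: words.count(w) for w in words if w}
    if not counts:
        return {}
    top = max(counts.values())
    return {w: c for w, c in counts.items() if c == top}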
2308.07124#212
2308.07124
213
# Input: Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: Fix bugs in has_close_elements.

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = elem - elem2
                if distance < threshold:
                    return True
    return False
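For reference, a minimal sketch of the expected repair: the buggy version compares a signed difference, so any pair with elem < elem2 trivially passes the check; taking the absolute value restores the intended behavior.

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    # Fix: compare the absolute distance between the two elements, so
    # the order of the pair no longer matters.
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2 and abs(elem - elem2) < threshold:
                return True
    return False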
2308.07124#213
2308.07107
214
[180] X. Lyu, S. Grafberger, S. Biegel, S. Wei, M. Cao, S. Schelter, and C. Zhang, “Improving retrieval-augmented large language models via data importance learning,” CoRR, vol. abs/2307.03027, 2023.
[181] T. Gao, H. Yen, J. Yu, and D. Chen, “Enabling large language models to generate text with citations,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 6465–6488.
2308.07107#214
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
214
return False

assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True
assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False
assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True
assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False
assert has_close_elements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True
assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True
assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False

check(has_close_elements)

### Response:
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:

InstructCodeT5+:
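Note why these unit tests catch the bug: the buggy body compares the signed difference, so any pair with elem < elem2 yields a negative "distance" below every positive threshold, making the function return True for almost any input. A small illustrative check:

print((1.0 - 2.0) < 0.05)     # True for the buggy comparison, so the second assert fails
print(abs(1.0 - 2.0) < 0.05)  # False once the distance is made absolute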
2308.07124#214
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
215
[182] H. Luo, T. Zhang, Y. Chuang, Y. Gong, Y. Kim, X. Wu, H. Meng, and J. R. Glass, “Search augmented instruction learning,” in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 3717–3729.
[183] X. V. Lin, X. Chen, M. Chen, W. Shi, M. Lomeli, R. James, P. Rodriguez, J. Kahn, G. Szilvasy, M. Lewis, L. Zettlemoyer, and S. Yih, “RA-DIT: retrieval-augmented dual instruction tuning,” CoRR, vol. abs/2310.01352, 2023.
2308.07107#215
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
215
### Response:
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:

InstructCodeT5+:
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = elem - elem2
                if distance < threshold:
                    return True
    return False

Figure 36: Incorrect solution from InstructCodeT5+ to a HUMANEVALFIX Python problem. InstructCodeT5+ simply repeated the function without adding the missing "abs" statement to turn the distance into an absolute value. This is one of the simplest bugs that most other models are able to fix. For InstructCodeT5+ the prompt starts directly after "Response:" without a newline (Figure 19).

O.7 STARCHAT-β

# Input:
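For contrast with Figure 36, a minimal corrected version, applying exactly the fix the caption describes (wrapping the difference in abs()):

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                # The repaired line: compare the absolute distance
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
    return False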
2308.07124#215
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
216
[184] W. Yu, Z. Zhang, Z. Liang, M. Jiang, and A. Sabharwal, “Improving language models via plug-and-play retrieval feedback,” CoRR, vol. abs/2305.14002, 2023.
[185] Z. Feng, X. Feng, D. Zhao, M. Yang, and B. Qin, “Retrieval-generation synergy augmented large language models,” CoRR, vol. abs/2310.05149, 2023.
[186] S. Kadavath, T. Conerly, A. Askell, T. Henighan, D. Drain, E. Perez, N. Schiefer, Z. Hatfield-Dodds, N. DasSarma, E. Tran-Johnson, S. Johnston, S. E. Showk, A. Jones, N. Elhage, T. Hume, A. Chen, Y. Bai, S. Bowman, S. Fort, D. Ganguli, D. Hernandez, J. Jacobson, J. Kernion, S. Kravec, L. Lovitt, K. Ndousse, C. Olsson, S. Ringer, D.
2308.07107#216
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
216
# Input:
#include<stdio.h>
#include<math.h>
#include<vector>
#include<string>
#include<algorithm>
using namespace std;
#include<stdlib.h>

int specialFilter(vector<int> nums){
    int num=0;
    for (int i=0;i<nums.size();i++)
        if (nums[i]>10)
        {
            string w=to_string(nums[i]);
            if (w[0]%2==1 and w[w.length()-1]%2==1)
                num+=1;
        }
    return num;
}

Provide a concise natural language description of the code using at most 296 characters.

StarChat-β:
This code is a solution to the "Special Array" problem on LeetCode. It uses C++ and includes several standard libraries, including vector, string, and algorithm. The code starts by declaring some variables and importing the necessary libraries. It then defines a specialFilter function that takes

# Input (without previous input):
2308.07124#216
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
217
cobson, J. Kernion, S. Kravec, L. Lovitt, K. Ndousse, C. Olsson, S. Ringer, D. Amodei, T. Brown, J. Clark, N. Joseph, B. Mann, S. McCandlish, C. Olah, and J. Kaplan, “Language models (mostly) know what they know,” CoRR, vol. abs/2207.05221, 2022.
[187] Z. Jiang, J. Araki, H. Ding, and G. Neubig, “How can we know when language models know? On the calibration of language models for question answering,” Trans. Assoc. Comput. Linguistics, vol. 9, pp. 962–977,
2308.07107#217
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
217
# Input (without previous input):
This code is a solution to the "Special Array" problem on LeetCode. It uses C++ and includes several standard libraries, including vector, string, and algorithm. The code starts by declaring some variables and importing the necessary libraries. It then defines a specialFilter function that takes

Write functional code in C++ according to the description.

#include<stdio.h>
#include<math.h>
#include<vector>
#include<string>
#include<algorithm>
using namespace std;
#include<stdlib.h>
int specialFilter(vector<int> nums){

# StarChat-β:
    int n = nums.size();
    int sum = 0;
    for(int i = 0; i < n; i++){
        sum += pow(nums[i], 2);
    }
    int target = (int)sqrt(sum);
    vector<int> res;
    for(int i = 0; i < n; i++){
        if(nums[i] > target){
            res.push_back(nums[i]);
        }
    }
    return res;
}
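To make clear what the reference specialFilter shown above actually computes (and what both the mis-description and StarChat-β's synthesis miss; note the synthesis also returns a vector from an int-returning function, which would not compile), here is a rough Python paraphrase. The function name is ours, not the benchmark's:

def special_filter(nums):
    # Count elements greater than 10 whose first and last decimal digits
    # are both odd; the C++ reference tests the digit characters modulo 2,
    # which is equivalent because '1', '3', ... have odd character codes.
    count = 0
    for n in nums:
        if n > 10:
            digits = str(n)
            if int(digits[0]) % 2 == 1 and int(digits[-1]) % 2 == 1:
                count += 1
    return count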
2308.07124#217
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
218
2021.
[188] O. Press, M. Zhang, S. Min, L. Schmidt, N. A. Smith, and M. Lewis, “Measuring and narrowing the compositionality gap in language models,” in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 5687–5711.
[189] O. Khattab, K. Santhanam, X. L. Li, D. Hall, P. Liang, C. Potts, and M. Zaharia, “Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP,” CoRR, vol. abs/2212.14024, 2022.
2308.07107#218
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
218
}

Figure 37: Incorrect solution from StarChat-β to a HUMANEVALEXPLAIN C++ problem. StarChat-β has a bias for excessively long answers. StarChat-β’s description includes irrelevant parts like "..problem on LeetCode..includes several standard libraries.." (despite the problem not existing on LeetCode). As the description exceeds the 296-character limit for the code, it is cut off at the end.

O.8 DIFF CODEGEN

# Input:
<NME> truncate_number.py
<BEF> def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).

    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
    return number % 1.0 + 1.0
<MSG> Fix bugs in truncate_number.
<DFF>

# Diff Codegen 2B
@@ -4,7 +4,7 @@ def truncate_number(number: float) -> float:
     and integer part (largest integer smaller than given number) and decimals
     (leftover part always smaller than 1).
-
+    Return the decimal part of the number.
     Return
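For reference, the actual bug in the <BEF> code above is the spurious + 1.0; a fixed body, consistent with the docstring's own example (truncate_number(3.5) == 0.5):

def truncate_number(number: float) -> float:
    # Return only the decimal part; the buggy version added + 1.0,
    # so truncate_number(3.5) returned 1.5 instead of 0.5.
    return number % 1.0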
2308.07124#218
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
219
[190] O. Yoran, T. Wolfson, B. Bogin, U. Katz, D. Deutch, and J. Berant, “Answering questions by meta-reasoning over multiple chains of thought,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 5942–5966.
[191] M. A. Arefeen, B. Debnath, and S. Chakradhar, “LeanContext: Cost-efficient domain-specific question answering using LLMs,” CoRR, vol. abs/2309.00841, 2023.
[192] F. Xu, W. Shi, and E. Choi, “RECOMP: improving retrieval-augmented LMs with compression and selective augmentation,” CoRR, vol. abs/2310.04408, 2023.
[193] Z. Wang, J. Araki, Z. Jiang, M. R. Parvez, and G. Neubig, “Learning to filter context for retrieval-augmented generation,” CoRR, vol. abs/2311.08377, 2023.
2308.07107#219
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
219
Figure 38: Incorrect solution from Diff Codegen 2B to a HUMANEVALFIXDOCS Python problem. Diff Codegen 2B suggests an irrelevant diff modifying parts of the docstring. The model commonly outputs diffs that modify the docstring or an import statement and rarely addresses the actual bug.

# P LIMITATIONS AND FUTURE WORK

Model Execution: A promising avenue for improving performance on HUMANEVALFIX is letting the model execute the given code or its own generated code and inspect its output (Chen et al., 2022; 2023c; Yasunaga & Liang, 2021; Li et al., 2022a; Gao et al., 2023; Dong et al., 2023; Zhang et al., 2023c; Madaan et al., 2023b; Ni et al., 2023; Gou et al., 2023; Hu et al., 2023; Taylor et al., 2022; Nye et al., 2021). This could allow the model to discover which unit tests are failing and for what reason. The model could then simply iterate on the function until all unit tests are passing. We leave explorations of this strategy to improve performance on HUMANEVALPACK to future work.
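A minimal sketch of the execute-and-retry loop the Model Execution paragraph envisions. Everything here is an assumption for illustration, not the paper's implementation: the model object is assumed to expose generate(prompt) -> str, and the feedback format is ours.

import subprocess
import tempfile

def repair_with_execution(model, prompt, tests, max_iters=3):
    # Regenerate the function until the given unit tests pass (or we give up).
    feedback = ""
    candidate = ""
    for _ in range(max_iters):
        candidate = model.generate(prompt + feedback)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate + "\n" + tests)
            path = f.name
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=10
        )
        if result.returncode == 0:
            return candidate  # all unit tests passed
        # Surface the failing traceback to the model and try again.
        feedback = "\n# Previous attempt failed with:\n# " + result.stderr[-500:]
    return candidate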
2308.07124#219
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
220
[194] J. Liu, L. Li, T. Xiang, B. Wang, and Y. Qian, “TCRA-LLM: Token compression retrieval augmented large language model for inference cost reduction,” in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 9796–9810.
[195] N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni, and P. Liang, “Lost in the middle: How language models use long contexts,” CoRR, vol. abs/2307.03172, 2023.
[196] R. Ren, Y. Wang, Y. Qu, W. X. Zhao, J. Liu, H. Tian, H. Wu, J. Wen, and H. Wang, “Investigating the factual knowledge boundary of large language models with retrieval augmentation,” CoRR, vol. abs/2307.11019, 2023.
2308.07107#220
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
220
Multi-file changes: For the creation of COMMITPACK, we have filtered out any commits that affect multiple files to ensure commits are very specific and account for the fact that most current models are only capable of operating on a single file. Allowing models to take multiple files as input and modify multiple files given a single instruction is a promising direction for future work. There is active research on using repository-level context (Ding et al., 2022; Shrivastava et al., 2023a;b; Zhang et al., 2023a; Liu et al., 2023d) and the necessary long context windows (Dai et al., 2019; Press et al., 2021; Sun et al., 2021; Dao et al., 2022; Peng et al., 2023; Liu et al., 2023c; Chen et al., 2023b).
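As a rough illustration of the single-file filter described above; the commit record layout ("files" entries with a "path" key) is an assumption for the sketch, not CommitPack's actual schema:

def touches_single_file(commit: dict) -> bool:
    # Keep a commit only if exactly one file path is modified.
    changed_paths = {f["path"] for f in commit.get("files", [])}
    return len(changed_paths) == 1

# Hypothetical usage over a stream of commit records:
# single_file_commits = [c for c in commits if touches_single_file(c)]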
2308.07124#220
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
221
[197] Y. Liu, S. Yavuz, R. Meng, M. Moorthy, S. Joty, C. Xiong, and Y. Zhou, “Exploring the integration strategies of retriever and large language models,” CoRR, vol. abs/2308.12574, 2023.
[198] R. Aksitov, C. Chang, D. Reitter, S. Shakeri, and Y. Sung, “Characterizing attribution and fluency tradeoffs for retrieval-augmented large language models,” CoRR, vol. abs/2302.05578, 2023.
[199] A. Mallen, A. Asai, V. Zhong, R. Das, D. Khashabi, and H. Hajishirzi, “When not to trust language models: Investigating effectiveness of parametric and non-parametric memories,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 9802–9822.
2308.07107#221
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
221
Length-awareness Current Code LLMs, including OCTOCODER, struggle to keep track of the length of their generated output. For HUMANEVALEXPLAIN, we instruct the models to limit their output to a given number of characters. While it is trivial for humans to count characters and adhere to the limit, all models tested frequently generate far too many characters. Prior work has shown that human raters are biased towards preferring longer texts (Wu & Aji, 2023) regardless of content. All models evaluated are instruction tuned on text that was at least indirectly assessed by human raters, hence they may be biased towards generating longer texts even if it means including literary bloat. Better evaluation Evaluating code instruction models is challenging for several reasons: (1) Prompting: The prompt can significantly impact the performance of large language mod
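The character-budget check itself is easy to automate on the evaluation side. The sketch below is a minimal illustration under our own assumptions: the helper name `within_char_limit`, the 300-character limit, and the sample outputs are all hypothetical rather than taken from the paper's harness.

```python
# Minimal sketch of flagging generations that exceed the character budget
# stated in the prompt. The function name, the 300-character limit, and
# the sample outputs are hypothetical, not the paper's evaluation harness.
def within_char_limit(explanation: str, limit: int) -> bool:
    """Return True if the model's output stays within `limit` characters."""
    return len(explanation) <= limit

outputs = ["Sorts the list in place.", "A very long explanation... " * 50]
violations = [o for o in outputs if not within_char_limit(o, limit=300)]
print(f"{len(violations)} of {len(outputs)} outputs exceeded the limit")
```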
2308.07124#221
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
222
[200] Y. Wang, X. Ma, and W. Chen, “Augmenting black-box llms with medical textbooks for clinical question answering,” CoRR, vol. abs/2309.02233, 2023. [201] S. Munikoti, A. Acharya, S. Wagle, and S. Horawalavithana, “ATLANTIC: structure-aware retrieval-augmented language model for interdisciplinary science,” CoRR, vol. abs/2311.12289, 2023. [202] X. Li, E. Nie, and S. Liang, “Crosslingual retrieval augmented in-context learning for bangla,” CoRR, vol. abs/2311.00587, 2023. [203] A. Lozano, S. L. Fleming, C. Chiang, and N. Shah, “Clinfo.ai: An open-source retrieval-augmented large language model system for answering medical questions using scientific literature,” CoRR, vol. abs/2310.16146, 2023. [204] B. Zhang, H. Yang, T. Zhou, A. Babar, and X. Liu, “Enhancing financial sentiment analysis via retrieval augmented large language models,” in 4th ACM International Conference on AI in Finance, ICAIF 2023, Brooklyn, NY, USA, November 27-29, 2023. ACM, 2023, pp. 349–356.
2308.07107#222
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
222
els (Brown et al., 2020; Zhou et al., 2022; Muennighoff, 2022; Babe et al., 2023). To ensure fair evaluation we use the prompting format put forth by the respective authors of the models and a simple intuitive prompt for models without a canonical prompt (see Appendix N). However, this may put models without a canonical prompt recommendation (e.g. BLOOMZ, GPT-4) at a slight disadvantage. OCTOCODER and OCTOGEEX perform best when prompted using the same format we use during training (Figure 17) and we recommend always using this format at inference. (2) Processing: Models may accidentally impair otherwise correct code by e.g. including a natural language explanation in their output. We largely circumvent this issue through the use of strict stopping criteria and careful postprocessing (e.g. for GPT-4 we check if it has enclosed the code in backticks, and if so, extract only the inner part of the backticks discarding its explanations). (3) Execution: When executing code to compute pass@k, it is important that the generated code matches the installed programming language version. Models may inadvertently use expressions from
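To make the backtick postprocessing concrete, here is a hedged sketch of the kind of extraction described above; the regex and the fallback behavior are our assumptions for illustration, not the exact per-model rules used in the evaluation.

```python
import re

# Triple backtick, built programmatically so this snippet stays readable
# when embedded in fenced documentation.
FENCE = "`" * 3

def extract_code(completion: str) -> str:
    # Keep only the body of the first triple-backtick block (optionally
    # tagged with a language name) and discard the surrounding natural
    # language; fall back to the raw completion if no fenced block exists.
    pattern = FENCE + r"(?:[a-zA-Z0-9_+#-]*\n)?(.*?)" + FENCE
    match = re.search(pattern, completion, re.DOTALL)
    return match.group(1) if match else completion

raw = "Here is the fix:\n" + FENCE + "python\nprint('hi')\n" + FENCE + "\nThanks!"
print(extract_code(raw))  # -> print('hi')
```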
2308.07124#222
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
223
[205] A. Louis, G. van Dijck, and G. Spanakis, “Interpretable long-form legal question answering with retrieval-augmented large language models,” CoRR, vol. abs/2309.17050, 2023. [206] G. Zyskind, T. South, and A. Pentland, “Don’t forget private retrieval: distributed private similarity search for large language models,” CoRR, vol. abs/2311.12955, 2023. [207] W. Jiang, M. Zeller, R. Waleffe, T. Hoefler, and G. Alonso, “Chameleon: a heterogeneous and disaggregated accelerator system for retrieval-augmented language models,” CoRR, vol. abs/2310.09949, 2023.
2308.07107#223
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
223
Execution: When executing code to compute pass@k, it is important that the generated code matches the installed programming language version. Models may inadvertently use expressions from a different version (e.g. they may use the Python 2 syntax of print "hi", which would fail in a Python 3 environment). In our evaluation, we did not find this to be a problem, however, as models become more capable, it may make sense to specify the version. Future prompts may include the version (e.g. “use JDK 1.18.0”) or provide models with an execution environment that has the exact version installed that will be used for evaluation. (4) Comprehensiveness: Executing code can only reflect functional correctness, lacking a comprehensive understanding of quality. Compared to execution-based evaluation, the human judgment of code quality can be considered more comprehensive as humans can consider factors beyond correctness. Directly hiring human annotators can be inefficient and expensive, and therefore researchers have explored approaches to automate human-aligned evaluation via LLMs (Fu et al., 2023; Liu et al., 2023e; Zhuo, 2023). However, recent work (Wang et al., 2023b)
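For background on the pass@k computation referenced above, the standard unbiased estimator from Chen et al. (2021) can be written in a few lines; this sketch is illustrative rather than taken from the paper's codebase.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k from Chen et al. (2021): 1 - C(n-c, k) / C(n, k),
    # i.e. one minus the probability that k samples drawn without
    # replacement from n completions (c of which pass) all fail.
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

print(pass_at_k(n=20, c=5, k=1))  # 0.25
```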
2308.07124#223
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
224
[208] Y. Hoshi, D. Miyashita, Y. Ng, K. Tatsuno, Y. Morioka, O. Torii, and J. Deguchi, “Ralle: A framework for developing and evaluating retrieval-augmented large language models,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023 - System Demonstrations, Singapore, December 6-10, 2023, Y. Feng and E. Lefever, Eds. Association for Computational Linguistics, 2023, pp. 52–69. [209] R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, Y. Li, H. Lee, H. S. Zheng, A. Ghafouri, M. Menegali, Y. Huang, M. Krikun, D. Lepikhin, J. Qin, D. Chen, Y. Xu, Z. Chen, A. Roberts, M. Bosma, Y. Zhou, C. Chang, I. Krivokon, W. Rusch, M. Pickett, K.
2308.07107#224
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
225
A. Roberts, M. Bosma, Y. Zhou, C. Chang, I. Krivokon, W. Rusch, M. Pickett, K. S. Meier-Hellstern, M. R. Morris, T. Doshi, R. D. Santos, T. Duke, J. Soraker, B. Zevenbergen, V. Prabhakaran, M. Diaz, B. Hutchinson, K. Olson, A. Molina, E. Hoffman-John, J. Lee, L. Aroyo, R. Rajakumar, A. Butryna, M. Lamm, V. Kuzmina, J. Fenton, A. Cohen, R. Bernstein, R. Kurzweil, B. A. y Arcas, C. Cui, M. Croak, E. H. Chi, and Q. Le, “Lamda: Language models for dialog applications,” CoRR, vol. abs/2201.08239, 2022.
2308.07107#225
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07124
225
Reward Models Our commit datasets, COMMITPACK and COMMITPACKFT, also lend themselves well to learning human preferences. The changed code after a commit generally represents a human-preferred version of the code (otherwise the code would not have been modified). Thus, one could train a reward model that, given the code before and after a commit, learns that the code afterward is better. Similar to prior work (Ouyang et al., 2022), this reward model could then be used to guide a language model to generate code that is preferred by humans. # Q OCTOBADPACK
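As a sketch of how such a reward model might be trained (our assumption, following Bradley-Terry style preference learning as in Ouyang et al. (2022), rather than a procedure specified here), one could score the pre- and post-commit code and push the post-commit score higher:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_after: torch.Tensor,
                    reward_before: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style pairwise loss: drive the reward model to score
    # the post-commit (human-preferred) code above the pre-commit code.
    return -F.logsigmoid(reward_after - reward_before).mean()

# Hypothetical scores a reward model might assign to three commit pairs.
after = torch.tensor([1.2, 0.4, 2.0])
before = torch.tensor([0.3, 0.9, 1.1])
print(preference_loss(after, before))  # decreases as the ranking improves
```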
2308.07124#225
OctoPack: Instruction Tuning Code Large Language Models
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
http://arxiv.org/pdf/2308.07124
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
cs.CL, cs.AI
57 pages (9 main), 39 figures, 16 tables
null
cs.CL
20230814
20230814
[ { "id": "2302.00288" }, { "id": "2205.12374" }, { "id": "2204.05999" }, { "id": "2105.09352" }, { "id": "2212.12017" }, { "id": "2305.09857" }, { "id": "2304.12244" }, { "id": "2307.03025" }, { "id": "2204.06745" }, { "id": "2301.08653" }, { "id": "2209.13331" }, { "id": "2208.11663" }, { "id": "2212.10007" }, { "id": "2303.14100" }, { "id": "1707.02275" }, { "id": "2304.03816" }, { "id": "2302.01973" }, { "id": "2302.05527" }, { "id": "2306.03091" }, { "id": "2305.13169" }, { "id": "2306.08568" }, { "id": "2303.12712" }, { "id": "2305.14314" }, { "id": "2305.18507" }, { "id": "2202.08904" }, { "id": "2306.15595" }, { "id": "2301.13246" }, { "id": "2105.09938" }, { "id": "2211.09085" }, { "id": "2303.12570" }, { "id": "2207.14255" }, { "id": "2302.04166" }, { "id": "2005.00653" }, { "id": "2211.05100" }, { "id": "2206.08896" }, { "id": "2105.14242" }, { "id": "2305.07922" }, { "id": "2108.07732" }, { "id": "2102.04664" }, { "id": "2207.11280" }, { "id": "2305.11738" }, { "id": "1901.02860" }, { "id": "2306.04556" }, { "id": "1908.09804" }, { "id": "2111.03922" }, { "id": "2112.02721" }, { "id": "2301.03988" }, { "id": "2210.14868" }, { "id": "2304.01102" }, { "id": "2305.16264" }, { "id": "2303.17568" }, { "id": "2305.01210" }, { "id": "2306.02858" }, { "id": "2305.13048" }, { "id": "2209.07858" }, { "id": "2209.14876" }, { "id": "2306.10998" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2207.10397" }, { "id": "2307.02053" }, { "id": "2305.15717" }, { "id": "2302.07867" }, { "id": "2210.15424" }, { "id": "2204.05862" }, { "id": "2304.07590" }, { "id": "2307.03172" }, { "id": "2307.02469" }, { "id": "2308.01861" }, { "id": "2108.04631" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2212.09535" }, { "id": "2305.03726" }, { "id": "2304.14317" }, { "id": "2304.05128" }, { "id": "2305.02309" }, { "id": "2210.07316" }, { "id": "2306.11644" }, { "id": "2304.07327" }, { "id": "2211.15395" }, { "id": "2212.09803" }, { "id": "2302.05020" }, { "id": "2303.03004" }, { "id": "2211.01910" }, { "id": "2107.03374" }, { "id": "2211.01786" }, { "id": "2108.12409" }, { "id": "2306.04751" }, { "id": "2307.09288" }, { "id": "2304.08485" }, { "id": "2204.07705" }, { "id": "2203.13474" }, { "id": "2203.08388" }, { "id": "2305.06161" }, { "id": "2306.00029" }, { "id": "2212.10481" }, { "id": "2304.11158" }, { "id": "2206.08474" }, { "id": "2112.00114" }, { "id": "2303.17651" }, { "id": "2305.18584" }, { "id": "1911.02150" }, { "id": "2305.11206" }, { "id": "2211.15533" } ]
2308.07107
226
[210] K. Shuster, M. Komeili, L. Adolphs, S. Roller, A. Szlam, and J. Weston, “Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion,” in Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, Y. Goldberg, Z. Kozareva, and Y. Zhang, Eds. Association for Computational Linguistics, 2022, pp. 373–393. [211] X. Liu, H. Lai, H. Yu, Y. Xu, A. Zeng, Z. Du, P. Zhang, Y. Dong, and J. Tang, “Webglm: Towards an efficient web-enhanced question answering system with human preferences,” in Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2023, Long Beach, CA, USA, August 6-10, 2023, A. K. Singh, Y. Sun, L. Akoglu, D. Gunopulos, X. Yan, R. Kumar, F. Ozcan, and J. Ye, Eds. ACM, 2023, pp. 4549–4560.
2308.07107#226
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
227
[212] I. Gur, H. Furuta, A. Huang, M. Safdari, Y. Matsuo, D. Eck, and A. Faust, “A real-world webagent with planning, long context understanding, and program synthesis,” CoRR, vol. abs/2307.12856, 2023. [213] J. Menick, M. Trebacz, V. Mikulik, J. Aslanides, H. F. Song, M. J. Chadwick, M. Glaese, S. Young, L. Campbell-Gillingham, G. Irving, and N. McAleese, “Teaching language models to support answers with verified quotes,” CoRR, vol. abs/2203.11147, 2022. [214] X. Shi, J. Liu, Y. Liu, Q. Cheng, and W. Lu, “Know where to go: Make LLM a relevant, responsible, and trustworthy searcher,” CoRR, vol. abs/2310.12443, 2023.
2308.07107#227
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
228
[215] Y. Qin, Z. Cai, D. Jin, L. Yan, S. Liang, K. Zhu, Y. Lin, X. Han, N. Ding, H. Wang, R. Xie, F. Qi, Z. Liu, M. Sun, and J. Zhou, “Webcpm: Interactive web search for chinese long-form question answering,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 8968–8988. [216] X. Deng, Y. Gu, B. Zheng, S. Chen, S. Stevens, B. Wang, H. Sun, and Y. Su, “Mind2web: Towards a generalist agent for the web,” CoRR, vol. abs/2306.06070, 2023. [217] S. Yao, H. Chen, J. Yang, and K. Narasimhan, “Webshop: Towards scalable real-world web interaction with grounded language agents,” in NeurIPS, 2022.
2308.07107#228
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
229
[218] S. Zhou, F. F. Xu, H. Zhu, X. Zhou, R. Lo, A. Sridhar, X. Cheng, Y. Bisk, D. Fried, U. Alon, and G. Neubig, “Webarena: A realistic web environment for building autonomous agents,” CoRR, vol. abs/2307.13854, 2023. [219] R. Lo, A. Sridhar, F. F. Xu, H. Zhu, and S. Zhou, “Hierarchical prompting assists large language model on web navigation,” in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 10 217–10 244. [220] S. MacAvaney, C. Macdonald, R. Murray-Smith, and I. Ounis, “Intent5: Search result diversification using causal language models,” CoRR, vol. abs/2108.04026, 2021.
2308.07107#229
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
230
[221] N. Craswell, “Mean reciprocal rank,” in Encyclopedia of Database Systems, L. Liu and M. T. Özsu, Eds. Springer US, 2009, p. 1703. [222] K. Järvelin and J. Kekäläinen, “Cumulated gain-based evaluation of IR techniques,” ACM Trans. Inf. Syst., vol. 20, no. 4, pp. 422–446, 2002. [223] K. Papineni, S. Roukos, T. Ward, and W. Zhu, “Bleu: a method for automatic evaluation of machine translation,” in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA. ACL, 2002, pp. 311–318. [224] C.-Y. Lin, “ROUGE: A package for automatic evaluation of summaries,” in Text Summarization Branches Out. Barcelona, Spain: Association for Computational Linguistics, Jul. 2004, pp. 74–81.
2308.07107#230
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
231
[225] P. Manakul, A. Liusie, and M. J. F. Gales, “Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models,” CoRR, vol. abs/2303.08896, 2023. [226] H. Qian, Y. Zhu, Z. Dou, H. Gu, X. Zhang, Z. Liu, R. Lai, Z. Cao, J. Nie, and J. Wen, “Webbrain: Learning to generate factually correct articles for queries by grounding on large web corpus,” CoRR, vol. abs/2304.04358, 2023. [227] J. Li, X. Cheng, W. X. Zhao, J. Nie, and J. Wen, “Halueval: A large-scale hallucination evaluation benchmark for large language models,” CoRR, vol. abs/2305.11747, 2023.
2308.07107#231
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.07107
232
[228] L. Chen, Y. Deng, Y. Bian, Z. Qin, B. Wu, T. Chua, and K. Wong, “Beyond factuality: A comprehensive evaluation of large language models as knowledge generators,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 6325–6341. [229] S. Xu, D. Hou, L. Pang, J. Deng, J. Xu, H. Shen, and X. Cheng, “Ai-generated images introduce invisible relevance bias to text-image retrieval,” CoRR, vol. abs/2311.14084, 2023. [230] S. Dai, Y. Zhou, L. Pang, W. Liu, X. Hu, Y. Liu, X. Zhang, and J. Xu, “Llms may dominate information access: Neural retrievers are biased towards llm-generated texts,” CoRR, vol. abs/2310.20501, 2023.
2308.07107#232
Large Language Models for Information Retrieval: A Survey
As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field.
http://arxiv.org/pdf/2308.07107
Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen
cs.CL, cs.IR
updated to version 2
null
cs.CL
20230814
20240119
[ { "id": "2305.03195" }, { "id": "2310.09716" }, { "id": "2311.01555" }, { "id": "2312.02969" }, { "id": "2306.17563" } ]
2308.06782
0
arXiv:2308.06782v1 [cs.SE] 13 Aug 2023 # PENTESTGPT: An LLM-empowered Automatic Penetration Testing Tool Gelei Deng1, Yi Liu1, Víctor Mayoral-Vilches2,3, Peng Liu4, Yuekang Li5, Yuan Xu1, Tianwei Zhang1, Yang Liu1, Martin Pinzger2, and Stefan Rass6 1Nanyang Technological University, 2Alpen-Adria-Universität Klagenfurt, 3Alias Robotics, 4Institute for Infocomm Research, A*STAR, 5University of New South Wales, 6Johannes Kepler University Linz {gelei.deng, yi009, xu.yuan, tianwei.zhang, yangliu}@ntu.edu.sg, [email protected]
2308.06782#0
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool
Penetration testing, a crucial industrial practice for ensuring system security, has traditionally resisted automation due to the extensive expertise required by human professionals. Large Language Models (LLMs) have shown significant advancements in various domains, and their emergent abilities suggest their potential to revolutionize industries. In this research, we evaluate the performance of LLMs on real-world penetration testing tasks using a robust benchmark created from test machines with platforms. Our findings reveal that while LLMs demonstrate proficiency in specific sub-tasks within the penetration testing process, such as using testing tools, interpreting outputs, and proposing subsequent actions, they also encounter difficulties maintaining an integrated understanding of the overall testing scenario. In response to these insights, we introduce PentestGPT, an LLM-empowered automatic penetration testing tool that leverages the abundant domain knowledge inherent in LLMs. PentestGPT is meticulously designed with three self-interacting modules, each addressing individual sub-tasks of penetration testing, to mitigate the challenges related to context loss. Our evaluation shows that PentestGPT not only outperforms LLMs with a task-completion increase of 228.6\% compared to the \gptthree model among the benchmark targets but also proves effective in tackling real-world penetration testing challenges. Having been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and fostered active community engagement, attesting to its value and impact in both the academic and industrial spheres.
http://arxiv.org/pdf/2308.06782
Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass
cs.SE, cs.CR
null
null
cs.SE
20230813
20230813
[ { "id": "2305.13860" }, { "id": "2302.04023" }, { "id": "2206.07682" }, { "id": "2305.13534" }, { "id": "2303.18223" }, { "id": "2303.08896" }, { "id": "1905.05965" }, { "id": "2304.01852" } ]
2308.06782
1
liu [email protected], [email protected], [email protected] Abstract—Penetration testing, a crucial industrial practice for ensuring system security, has traditionally resisted automation due to the extensive expertise required by human professionals. Large Language Models (LLMs) have shown significant advancements in various domains, and their emergent abilities suggest their potential to revolutionize industries. In this research, we evaluate the performance of LLMs on real-world penetration testing tasks using a robust benchmark created from test machines with platforms. Our findings reveal that while LLMs demonstrate proficiency in specific sub-tasks within the penetration testing process, such as using testing tools, interpreting outputs, and proposing subsequent actions, they also encounter difficulties maintaining an integrated understanding of the overall testing scenario.
2308.06782#1
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool
Penetration testing, a crucial industrial practice for ensuring system security, has traditionally resisted automation due to the extensive expertise required by human professionals. Large Language Models (LLMs) have shown significant advancements in various domains, and their emergent abilities suggest their potential to revolutionize industries. In this research, we evaluate the performance of LLMs on real-world penetration testing tasks using a robust benchmark created from test machines with platforms. Our findings reveal that while LLMs demonstrate proficiency in specific sub-tasks within the penetration testing process, such as using testing tools, interpreting outputs, and proposing subsequent actions, they also encounter difficulties maintaining an integrated understanding of the overall testing scenario. In response to these insights, we introduce PentestGPT, an LLM-empowered automatic penetration testing tool that leverages the abundant domain knowledge inherent in LLMs. PentestGPT is meticulously designed with three self-interacting modules, each addressing individual sub-tasks of penetration testing, to mitigate the challenges related to context loss. Our evaluation shows that PentestGPT not only outperforms LLMs with a task-completion increase of 228.6\% compared to the \gptthree model among the benchmark targets but also proves effective in tackling real-world penetration testing challenges. Having been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and fostered active community engagement, attesting to its value and impact in both the academic and industrial spheres.
http://arxiv.org/pdf/2308.06782
Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass
cs.SE, cs.CR
null
null
cs.SE
20230813
20230813
[ { "id": "2305.13860" }, { "id": "2302.04023" }, { "id": "2206.07682" }, { "id": "2305.13534" }, { "id": "2303.18223" }, { "id": "2303.08896" }, { "id": "1905.05965" }, { "id": "2304.01852" } ]
2308.06782
2
In response to these insights, we introduce PENTESTGPT, an LLM-empowered automatic penetration testing tool that leverages the abundant domain knowledge inherent in LLMs. PENTESTGPT is meticulously designed with three self-interacting modules, each addressing individual sub-tasks of penetration testing, to mitigate the challenges related to context loss. Our evaluation shows that PENTESTGPT not only outperforms LLMs with a task-completion increase of 228.6% compared to the GPT-3.5 model among the benchmark targets but also proves effective in tackling real-world penetration testing challenges. Having been open-sourced on GitHub, PENTESTGPT has garnered over 4,700 stars and fostered active community engagement, attesting to its value and impact in both the academic and industrial spheres. Index Terms—security, offensive, cybersecurity, pentesting # 1. Introduction attempt breaches of an organization’s defenses to uncover vulnerabilities. They offer marked advantages over traditional defensive mechanisms, which rely on incomplete system knowledge and modeling. Guided by the principle “the best defense is a good offense”, this study focuses on offensive strategies, particularly penetration testing.
2308.06782#2
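To make the module interplay concrete, the following is a minimal sketch assuming a generic chat-completion backend; the function names, the TaskTree structure, and query_llm() are illustrative assumptions rather than PENTESTGPT's actual API.

from dataclasses import dataclass, field
from typing import List

def query_llm(prompt: str) -> str:
    """Stand-in for a call to an LLM backend (assumed, e.g., a chat-completion API)."""
    raise NotImplementedError

@dataclass
class TaskTree:
    """Running summary of all discovered attack surfaces (assumed structure)."""
    findings: List[str] = field(default_factory=list)

def reasoning_module(tree: TaskTree) -> str:
    # Keeps the big picture: selects the next sub-task from *all* findings,
    # not just the most recent ones, to mitigate context loss.
    prompt = ("Given findings:\n" + "\n".join(tree.findings)
              + "\nWhich penetration testing sub-task should be attempted next?")
    return query_llm(prompt)

def generation_module(sub_task: str) -> str:
    # Expands an abstract sub-task into a concrete command or GUI operation.
    return query_llm(f"Produce the exact testing step for: {sub_task}")

def parsing_module(raw_output: str, tree: TaskTree) -> None:
    # Condenses verbose tool output into a short finding before storing it.
    tree.findings.append(query_llm(f"Summarize this tool output:\n{raw_output}"))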
Penetration testing [2] is a proactive offensive technique aiming at identifying, assessing, and mitigating as many security vulnerabilities as possible. This involves executing targeted attacks to confirm diverse flaws (e.g., erratic behaviors) and is efficacious in creating a comprehensive inventory of vulnerabilities complemented by actionable enhancement recommendations. As a widely-employed practice for security appraisal, penetration testing empowers organizations to discern and neutralize potential vulnerabilities in their networks and systems before exploitation by malicious entities. Despite its significance, the industry often leans on manual techniques and specialized knowledge [3], making it labor-intensive. This has generated a gap in responding to the escalating demand for adept and efficient security evaluations.

Recently, Large Language Models (LLMs) [4], [5] are making striking progress, exhibiting an increasingly nuanced understanding of human-like text and effectively executing various tasks across diverse domains. One intriguing aspect of LLMs is their emergent abilities [6], which are not explicitly programmed but arise during the training process. These abilities enable LLMs to perform complex tasks such as reasoning, summarization, question-answering, and domain-specific problem-solving without requiring specialized training. Such capabilities indicate the transformative potential of LLMs across various sectors, including cybersecurity. A critical question thus emerges: can LLMs be leveraged in cybersecurity, particularly for performing automated penetration testing?
2308.06782#3
Guaranteeing a system’s immunity to potential attacks is a formidable challenge. Offensive security methods, such as penetration testing (pen-testing) or red teaming, have become essential in the security lifecycle. As detailed by Applebaum [1], these methods require security teams to attempt breaches of an organization’s defenses to uncover vulnerabilities.

To answer this question, we set out to evaluate the capabilities of LLMs on real-world penetration testing tasks. Unfortunately, the current benchmarks for penetration testing [7], [8] are not comprehensive and fail to assess progressive accomplishments fairly during the process.

[Figure 1 (diagram): components 1. ExploitFlow, 2. PentestGPT (this paper), 3. PentestPerf, 4. Malism; the legend distinguishes external entities, inner components, this paper, and other future papers. The full caption follows below.]
2308.06782#4
Figure 1: Architecture of our framework to develop fully automated penetration testing tools, MALISM. The figure depicts the various interaction flows that an arbitrary User could follow using MALISM to pentest a given Target. 1. Corresponds with EXPLOITFLOW, a modular library to produce security exploitation routes (exploit flows) that captures the state of the system being tested in a flow after every discrete action. 2. (this paper) Corresponds with PENTESTGPT, a testing tool that leverages the power of LLMs to produce testing guidance (heuristics) for every given discrete state. 3. PENTESTPERF is a comprehensive penetration testing benchmark to evaluate the performance of penetration testers and automated tools across a wide array of testing targets. 4. Captures MALISM, our framework to develop fully automated penetration testing tools, which we name cybersecurity cognitive engines.

Existing benchmarks fail to assess progressive accomplishments fairly during the process. To address this limitation, we construct a robust benchmark that includes test machines from HackTheBox [9] and VulnHub [10], two leading platforms for penetration testing challenges. Comprising 13 targets with 182 sub-tasks, our benchmark encompasses all vulnerabilities appearing in OWASP's top 10 vulnerability list [11]. It also offers a more detailed evaluation of the tester's performance by monitoring the completion status of each sub-task.
2308.06782#5
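Because progress is graded per sub-task rather than pass/fail per machine, the benchmark bookkeeping can be pictured as follows; this record layout and the example sub-tasks are assumptions for illustration, not the benchmark's released format.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class BenchmarkTarget:
    name: str  # e.g., a HackTheBox or VulnHub machine
    sub_tasks: Dict[str, bool] = field(default_factory=dict)  # sub-task -> completed?

    def completion_rate(self) -> float:
        # Partial credit: a tester that finishes half the sub-tasks scores 0.5,
        # even if the machine is never fully compromised.
        if not self.sub_tasks:
            return 0.0
        return sum(self.sub_tasks.values()) / len(self.sub_tasks)

target = BenchmarkTarget("example-machine", {
    "port scanning": True,
    "web enumeration": True,
    "initial foothold": False,
    "privilege escalation": False,
})
print(f"{target.completion_rate():.0%}")  # prints "50%"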
Armed with this benchmark, we conduct an exploratory study using GPT-3.5 [12], GPT-4 [13], and Bard [14] as representative LLMs. We interactively test these models by guiding them to complete the penetration tasks against our benchmark targets. This interaction involves setting a penetration testing goal for the LLM, soliciting it for the appropriate operation to execute, implementing it in the testing environment, and feeding the test outputs back to the LLM for next-step reasoning (Figure 2). By repeating this cycle, we derive the final penetration testing results. To evaluate the performance of the LLMs, we compare their results against baseline solutions provided by official walkthroughs and solutions from certified penetration testers. By analyzing similarities and differences in their problem-solving approaches, we aim to better understand LLMs’ penetration testing capabilities and discern how their problem-solving strategies diverge from those of human experts.
2308.06782#6
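The evaluation cycle just described (goal in, operation out, results fed back) compresses to a simple loop. In this sketch, query_llm() and execute_in_environment() are assumed stand-ins for the model API and for the human tester relaying commands to the target; the stop condition is likewise an assumption.

def query_llm(prompt: str) -> str:
    raise NotImplementedError  # assumed model API

def execute_in_environment(operation: str) -> str:
    raise NotImplementedError  # a human tester runs the step on the target

def run_session(goal: str, max_steps: int = 50) -> str:
    # Interactive cycle: ask for the next operation, execute it, and feed the
    # output back so the model can reason about the following step.
    context = f"Penetration testing goal: {goal}"
    for _ in range(max_steps):
        operation = query_llm(context + "\nWhat operation should be executed next?")
        if "goal reached" in operation.lower():  # naive stop condition (assumed)
            break
        output = execute_in_environment(operation)
        context += f"\n> {operation}\n{output}"
    return context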
Our investigation yields intriguing insights into the capabilities and limitations of LLMs in penetration testing. We discover that LLMs demonstrate proficiency in managing specific sub-tasks within the testing process, such as utilizing testing tools, interpreting their outputs, and suggesting subsequent actions. Compared to human experts, LLMs are especially adept at executing complex commands and options with testing tools, while models like GPT-4 excel in comprehending source code and pinpointing vulnerabilities. Furthermore, LLMs can craft appropriate test commands and accurately describe graphical user-interface operations needed for specific tasks. Leveraging their vast knowledge base, they can design inventive testing procedures to unveil potential vulnerabilities in real-world systems and CTF challenges. However, we also note that LLMs have difficulty in maintaining a coherent grasp of the overarching testing scenario, a vital aspect for attaining the testing goal. As the dialogue advances, they may lose sight of earlier discoveries and struggle to apply their reasoning consistently toward the final objective. Additionally, LLMs might overemphasize recent tasks in the conversation history, regardless of their vulnerability status. As a result, they tend to neglect other potential attack surfaces exposed in prior tests and fail to complete the penetration testing task.
2308.06782#7
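One simple mechanism that can produce this failure mode: when conversation history is naively truncated to fit a context window, the earliest findings silently fall out of the prompt. A toy illustration, where the message budget and truncation policy are assumptions:

history = [
    "finding: port 80 open (web service, unexplored)",
    "finding: port 22 open (ssh)",
    "tried: ssh brute force (failed)",
    "tried: nikto scan (no result)",
    "tried: manual credential guessing (failed)",
]
budget = 4                         # arbitrary context budget, for illustration
prompt_window = history[-budget:]  # keep only the most recent messages
assert "finding: port 80 open (web service, unexplored)" not in prompt_window
# The oldest, still-promising attack surface is no longer visible to the model,
# so reasoning fixates on the recent (failed) attempts.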
The outcomes of our empirical study are promising, revealing that LLMs possess the necessary domain knowledge to perform penetration testing tasks. In particular, they are great at providing an intuition of what to do in a given networking scenario. However, what they lack is effective guidance to carry out these tasks independently and maintain a cohesive grasp of the testing scenario. On the other hand, a prior research publication [] investigated capturing the exploitation route (or flow) for automation. Given the complexity of the (network) state space, the state itself is not enough to reason about the best pentesting actions. It rapidly becomes evident that a heuristic is needed to support autonomous pentesting, one that helps pick actions to achieve given goals (a minimal sketch of this pairing appears below). With this understanding, we aim to contribute to unlocking the potential of modern machine learning approaches and develop a fully automated penetration testing framework that helps produce cybersecurity cognitive engines. Our overall architecture is depicted in Figure 1, which shows our current work so far and near-future planned contributions. Our proposed framework, MALISM, is designed to enable a user without in-depth security domain knowledge to produce their own cybersecurity cognitive engine that helps conduct penetration testing over an extensive range of targets. This framework comprises three primary components:
2308.06782#8
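Returning to the heuristic argument above, a minimal sketch of pairing a state-capturing exploit flow with an LLM-backed action heuristic could look like this; every name here is an illustrative assumption, not either library's real API.

from typing import List

class ExploitFlowState:
    """Assumed stand-in for the state captured after every discrete action."""
    def __init__(self) -> None:
        self.tried: List[str] = []

    def candidate_actions(self) -> List[str]:
        # In a real flow these would be derived from the current network state.
        return ["scan ports", "enumerate web paths", "try default credentials"]

def heuristic_next_action(state: ExploitFlowState, goal: str) -> str:
    # A real engine would prompt an LLM with the state and goal; to keep the
    # sketch self-contained, we simply prefer actions not yet attempted.
    for action in state.candidate_actions():
        if action not in state.tried:
            return action
    return "report findings"

state = ExploitFlowState()
action = heuristic_next_action(state, goal="obtain root on target")
state.tried.append(action)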