doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2308.07107 | 117 | # 8.7 Bias
Since ChatGPT was released, LLMs have drawn much attention from both academia and industry. The wide applications of LLMs have led to a notable increase in content on the Internet that is not authored by humans but rather generated by these language models. However, as LLMs may hallucinate and generate non-factual texts, the increasing number of LLM-generated contents also brings worries that these contents may provide fictitious information for users across IR systems. More severely, researchers [229, 230] show that some modules in IR systems, such as retrievers and rerankers, especially those based on neural models, may prefer LLM-generated documents, since their topics are more consistent and their perplexity is lower compared with human-written documents. The authors refer to this phenomenon as the "source bias" towards LLM-generated text. It is challenging but necessary to consider how to build IR systems free from this category of bias. | 2308.07107#117 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 117 | graphviz-dot 85.8 1525 0.0 0.07 35 0.0 pawn 85.42 580 0.0 0.01 3 0.0 jsoniq 75.15 1343 0.0 0.01 6 0.0 bluespec 72.38 2500 0.0 0.01 2 0.0 smali 71.38 174 0.0 0 0 0.0 krl 69.87 1879 0.0 0.02 4 0.0 maple 68.28 1311 0.0 0.01 2 0.0 unrealscript 67.67 585 0.0 0.01 1 0.0 ooc 63.19 3416 0.0 0.04 15 0.0 pure-data 62.62 603 0.0 0.01 1 0.0 xquery 61.96 2237 0.0 0.08 39 0.01 del 59.64 833 0.0 0.04 19 0.0 moonscript 59.21 1951 0.0 0.02 10 0.0 awk 57.18 2206 0.0 0.1 52 0.01 pike 52.87 1262 0.0 0.02 6 0.0 livescript 51.23 5194 0.0 0.13 63 0.01 solidity 50.86 3689 0.0 0.08 37 0.01 monkey 48.26 1367 0.0 | 2308.07124#117 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 118 | 9 CONCLUSION In this survey, we have conducted a thorough exploration of the transformative impact of LLMs on IR across various dimensions. We have organized existing approaches into distinct categories based on their functions: query rewriting, retrieval, reranking, and reader modules. In the domain of query rewriting, LLMs have demonstrated their effectiveness in understanding ambiguous or multi-faceted queries, enhancing the accuracy of intent identification. In the context of retrieval, LLMs have improved retrieval accuracy by enabling more nuanced matching between queries and documents, considering context as well. Within the reranking realm, LLM-enhanced models consider more fine-grained linguistic nuances when re-ordering results. The incorporation of reader modules in IR systems represents a significant step towards generating comprehensive responses instead of mere document lists. The integration of LLMs into IR systems has brought about a fundamental change in how users engage with information and knowledge. From query rewriting to retrieval, reranking, and reader modules, LLMs have enriched each aspect of the IR process with advanced linguistic comprehension, semantic representation, and context-sensitive handling. As this field continues to progress, the journey of LLMs in IR portends a future characterized by more personalized, precise, and user-centric search encounters. | 2308.07107#118 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 118 | 5194 0.0 0.13 63 0.01 solidity 50.86 3689 0.0 0.08 37 0.01 monkey 48.26 1367 0.0 0.02 4 0.0 jsonld 48.01 462 0.0 0.02 6 0.0 zephir 42.68 1265 0.0 0.02 4 0.0 crystal 41.92 4217 0.0 0.35 182 0.02 rhtml 41.02 4551 0.0 0.35 135 0.02 stata 40.68 1344 0.0 0.02 10 0.0 idris 39.9 3025 0.0 0.13 38 0.01 raml 39.39 948 0.0 0.03 9 0.0 openscad 37.73 2178 0.0 0.05 21 0.0 red 35.26 1108 0.0 0.01 1 0.0 c2hs-haskell 34.47 1021 0.0 0.01 2 0.0 cycript 33.96 197 0.0 0 0 0.0 applescript 33.51 1304 0.0 0.04 19 0.0 mupad 32.49 178 0.0 0.02 4 0.0 literate-agda 31.38 567 0.0 0.01 1 0.0 boo 31.17 26289 0.0 | 2308.07124#118 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 119 | This survey focuses on reviewing recent studies of applying LLMs to different IR components and using LLMs as search agents. Beyond this, a more significant problem brought by the appearance of LLMs is: is the conventional IR framework necessary in the era of LLMs? For example, traditional IR aims to return a ranking list of documents that are relevant to issued queries. However, the development of generative language models has introduced a novel paradigm: the direct generation of answers to input questions. Furthermore, according to a recent perspective paper [53], IR might evolve into a fundamental service for diverse systems. For example, in a multi-agent simulation system [231], an IR component can be used for memory recall. This implies that there will be many new challenges in future IR.
REFERENCES [1] | 2308.07107#119 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 119 | 0.0 0.02 4 0.0 literate-agda 31.38 567 0.0 0.01 1 0.0 boo 31.17 26289 0.0 0.01 2 0.0 sourcepawn 29.53 N17 0.0 0.01 3 0.0 qmake 29.51 3632 0.0 0.32 140 0.02 ragel-in-ruby-host 28.3 888 0.0 0.01 4 0.0 io 27.95 1247 0.0 0.01 4 0.0 desktop 27.65 5021 0.0 0.36 186 0.02 propeller-spin 26.77 625 0.0 0.01 1 0.0 thrift 26.75 1007 0.0 0.08 28 0.01 volt 25.05 1660 0.0 0.02 9 0.0 xproc 24.21 914 0.0 0.02 3 0.0 igor-pro 23.75 388 0.0 0.01 1 0.0 lolcode 23.74 24861 0.0 0 0 0.0 html+eex 21.41 2100 0.0 0.29 135 0.02 logtalk 20.43 1035 0.0 0.06 21 0.0 mirah 20.1 706 0.0 0.04 16 0.0 gnuplot 19.68 889 0.0 | 2308.07124#119 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 120 | Y. Wu, W. Wu, C. Xing, M. Zhou, and Z. Li, "Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, R. Barzilay and M. Kan, Eds. Association for Computational Linguistics, 2017, pp. 496-505. [2] H. Shum, X. He, and D. Li, "From Eliza to XiaoIce: challenges and opportunities with social chatbots," Frontiers Inf. Technol. Electron. Eng., vol. 19, no. 1, pp. 10-26, 2018. [3] V. Karpukhin, B. Oguz, S. Min, P. S. H. Lewis, L. Wu, S. Edunov, D. Chen, and W. Yih, "Dense passage retrieval for open-domain question answering," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, | 2308.07107#120 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 121 | for open-domain question answering," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, B. Webber, T. Cohn, Y. He, and Y. Liu, Eds. Association for Computational Linguistics, 2020, pp. 6769-6781. R. Datta, D. Joshi, J. Li, and J. Z. Wang, "Image retrieval: Ideas, influences, and trends of the new age," ACM Comput. Surv., vol. 40, no. 2, pp. 5:1-5:60, 2008. C. Yuan, W. Zhou, M. Li, S. Lv, F. Zhu, J. Han, and S. Hu, "Multi-hop selector network for multi-turn response selection in retrieval-based chatbots," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, K. Inui, J. Jiang, V. Ng, | 2308.07107#121 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
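Rows in this schema pair a paper's `doi` with ordered `chunk-id` values, so a paper's text can be reassembled by grouping and sorting its chunks. A minimal sketch of that grouping step, using illustrative stand-in records rather than real dataset rows (the `chunks_for` helper is an assumption, not part of the dataset):

```python
# Stand-in records mimicking the (doi, chunk-id, chunk, id, ...) schema above.
# These are illustrative placeholders, not actual dataset entries.
rows = [
    {"doi": "2308.07107", "chunk-id": 118, "chunk": "9 CONCLUSION ...", "id": "2308.07107#118"},
    {"doi": "2308.07107", "chunk-id": 117, "chunk": "# 8.7 Bias ...", "id": "2308.07107#117"},
    {"doi": "2308.07124", "chunk-id": 117, "chunk": "graphviz-dot 85.8 ...", "id": "2308.07124#117"},
]

def chunks_for(doi, rows):
    """Return one paper's chunk texts, ordered by chunk-id."""
    matching = (r for r in rows if r["doi"] == doi)
    return [r["chunk"] for r in sorted(matching, key=lambda r: r["chunk-id"])]

print(len(chunks_for("2308.07107", rows)))  # -> 2
```

Sorting by `chunk-id` matters because rows are not guaranteed to arrive in document order; the `id` column (`doi#chunk-id`) encodes the same ordering redundantly.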
2308.07124 | 121 | graphviz-dot pawn jsoniq bluespec smali krl maple unrealscript ooc pure-data xquery dcl moonscript awk pike livescript solidity monkey jsonld zephir crystal rhtml stata idris raml openscad red c2hs-haskell cycript applescript mupad literate-agda boo sourcepawn qmake ragel-in-ruby-host io desktop propeller-spin thrift volt xproc igor-pro lolcode html+eex logtalk mirah gnuplot literate-coffeescript jflex emberscript cobol yang rebol linker-script cartocss urweb rmarkdown darcs-patch | 2308.07124#121 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07124 | 122 | 85.8 85.42 75.15 72.38 71.38 69.87 68.28 67.67 63.19 62.62 61.96 59.64 59.21 57.18 52.87 51.23 50.86 48.26 48.01 42.68 41.92 41.02 40.68 39.9 39.39 37.73 35.26 34.47 33.96 33.51 32.49 31.38 31.17 29.53 29.51 28.3 27.95 27.65 26.77 26.75 25.05 24.21 23.75 23.74 21.41 20.43 20.1 19.68 19.02 18.61 18.39 17.0 16.94 16.47 16.08 15.92 13.07 13.03 13.01
1525 580 1343 2500 174 1879 1311 585 3416 603 2237 833 1951 2206 1262 5194 3689 1367 462 1265 4217 4551 1344 3025 948 2178 1108 1021 197 1304 178 567 26289 717 3632 888 1247 5021 625 1007 1660 914 388 24861 2100 1035 706 889 1041 555 1024 24953 597 239 1604 555 304 750 80 | 2308.07124#122 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 123 | [3]
[5]
[7]
Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, ser. Lecture Notes in Computer Science, D. Hiemstra, M. Moens, J. Mothe, R. Perego, M. Potthast, and F. Sebastiani, Eds., vol. 12656. Springer, 2021, pp. 755–769. Y. Zhu, J. Nie, K. Zhou, P. Du, H. Jiang, and Z. Dou, "Proactive retrieval-based chatbots based on relevant knowledge and goals," in SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, F. Diaz, C. Shah, T. Suel, P. Castells, R. Jones, and T. Sakai, Eds. ACM, 2021, pp. 2000–2004. | 2308.07107#123 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 124 | [8] H. Qian, Z. Dou, Y. Zhu, Y. Ma, and J. Wen, "Learning implicit user profiles for personalized retrieval-based chatbot," CoRR, vol. abs/2108.07935, 2021. [9] Y. Qu, Y. Ding, J. Liu, K. Liu, R. Ren, W. X. Zhao, D. Dong, H. Wu, and H. Wang, "Rocketqa: An optimized training approach to dense passage retrieval for open-domain question answering," in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tür, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, and Y. Zhou, Eds. Association for Computational Linguistics, 2021, pp. 5835–5847. [10] Y. Arens, C. A. | 2308.07107#124 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 124 | 0.07 0.01 0.01 0.01 0 0.02 0.01 0.01 0.04 0.01 0.08 0.04 0.02 0.1 0.02 0.13 0.08 0.02 0.02 0.02 0.35 0.35 0.02 0.13 0.03 0.05 0.01 0.01 0 0.04 0.02 0.01 0.01 0.01 0.32 0.01 0.01 0.36 0.01 0.08 0.02 0.02 0.01 0 0.29 0.06 0.04 0.03 0.05 0.01 0.02 0 0.02 0.01 0.08 0.01 0.02 0 0
35 3 6 2 0 4 2 1 15 1 39 19 10 52 6 63 37 4 6 4 182 135 10 38 9 21 1 2 0 19 4 1 2 3 140 4 4 186 1 28 9 3 1 0 135 21 16 17 19 1 7 0 6 3 37 3 6 0 0
25 | 2308.07124#124 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 125 | Zhou, Eds. Association for Computational Linguistics, 2021, pp. 5835–5847. [10] Y. Arens, C. A. Knoblock, and W. Shen, "Query reformulation for dynamic information integration," J. Intell. Inf. Syst., vol. 6, no. 2/3, pp. 99–130, 1996. [11] J. Huang and E. N. Efthimiadis, "Analyzing and evaluating query reformulation strategies in web search logs," in Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM 2009, Hong Kong, China, November 2-6, 2009, D. W. Cheung, I. Song, W. W. Chu, X. Hu, and J. Lin, Eds. ACM, 2009, pp. 77–86. | 2308.07107#125 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 126 | csound squirrel apl his] latte pony ioke hy uno pan xojo papyrus stan slash supercollider vel smt glyph wisp renpy clips dns-zone sas rouge ec dylan tcsh aspectj netlogo gap fancy coq click capn-proto flux forth ats netlinx clean parrot-assembly alloy lfe gdscript augeas sparql lilypond scilab autoit 12.85 12.84 12.56 12.17 11.89 11.84 10.86 10.51 10.36 10.34 10.31 10.26 10.25 9.9 9.8 9.46 9.03 8.95 8.74 8.3 7.73 7.56 7.54 7.2 7.03 6.82 6.52 6.33 6.3 6.1 5.95 5.74 5.74 5.64 5.57 5.51 5.42 5.17 5.07 4.66 4.64 4.58 4.49 4.44 4.31 4.09 4.06 229 531 586 1529 1380 624 373 879 628 637 642 130 540 640 318 747 117 262 421 450 54 269 396 94 280 748 451 140 46 675 330 47 265 383 144 171 227 203 287 460 395 1036 265 375 279 0.01 0.01 | 2308.07124#126 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 127 | [14] Y. Zhu, J. Nie, Z. Dou, Z. Ma, X. Zhang, P. Du, X. Zuo, and H. Jiang, "Contrastive learning of user behavior sequence for context-aware document ranking," in CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1 - 5, 2021, G. Demartini, G. Zuccon, J. S. Culpepper, Z. Huang, and H. Tong, Eds. ACM, 2021, pp. 2780–2791. [15] J. Teevan, S. T. Dumais, and E. Horvitz, "Personalizing search via automated analysis of interests and activities," in SIGIR 2005: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Salvador, Brazil, August 15-19, 2005, R. A. Baeza-Yates, N. Ziviani, G. Marchionini, A. Moffat, and J. Tait, Eds. ACM, 2005, pp. 449–456. | 2308.07107#127 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 128 | [15]
[16] P. N. Bennett, R. W. White, W. Chu, S. T. Dumais, P. Bailey, F. Borisyuk, and X. Cui, "Modeling the impact of short- and long-term behavior on search personalization," in The 35th International ACM SIGIR conference on research and development in Information Retrieval, SIGIR '12, Portland, OR, USA, August 12-16, 2012, W. R. Hersh, J. Callan, Y. Maarek, and M. Sanderson, Eds. ACM, 2012, pp. 185–194. | 2308.07107#128 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 129 | [17] S. Ge, Z. Dou, Z. Jiang, J. Nie, and J. Wen, "Personalizing search results using hierarchical RNN with query-aware attention," in Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, A. Cuzzocrea, J. Allan, N. W. Paton, D. Srivastava, R. Agrawal, A. Z. Broder, M. J. Zaki, K. S. Candan, A. Labrinidis, A. Schuster, and H. Wang, Eds. ACM, 2018, pp. 347–356. | 2308.07107#129 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 129 | 12.85 12.84 12.56 12.17 11.89 11.84 10.86 10.51 10.36 10.34 10.31 10.26 10.25 9.9 9.8 9.46 9.03 8.95 8.74 8.3 7.73 7.56 7.54 7.2 7.03 6.82 6.52 6.33 6.3 6.1 5.95 5.74 5.74 5.64 5.57 5.51 5.42 5.17 5.07 4.66 4.64 4.58 4.49 4.44 4.4 4.31 4.09 4.06 3.86 3.74 3.42 3.34 3.17 3.16 3.03 2.85 2.8 2.68 2.58
229 531 586 1529 1380 624 373 879 628 637 642 130 540 640 318 747 117 7 262 421 450 54 269 396 94 280 748 451 140 46 675 80 9 330 47 265 383 144 171 227 203 287 460 395 1036 265 375 279 105 220 337 107 513 211 414 414 47 74 601 | 2308.07124#129 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 130 | [18] Y. Zhou, Z. Dou, Y. Zhu, and J. Wen, "PSSL: self-supervised learning for personalized search with contrastive sampling," in CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1-5, 2021, G. Demartini, G. Zuccon, J. S. Culpepper, Z. Huang, and H. Tong, Eds. ACM, 2021, pp. 2749-2758. [19] J. G. Carbonell and J. Goldstein, "The use of MMR, diversity-based reranking for reordering documents and producing summaries," in SIGIR '98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28, 1998, Melbourne, Australia, W. B. Croft, A. Moffat, C. J. van Rijsbergen, R. Wilkinson, and J. Zobel, Eds. ACM, 1998, pp. 335-336.
| 2308.07107#130 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 131 | [20] R. Agrawal, S. Gollapudi, A. Halverson, and S. Ieong, "Diversifying search results," in Proceedings of the Second International Conference on Web Search and Web Data Mining, WSDM 2009, Barcelona, Spain, February 9-11, 2009, R. Baeza-Yates, P. Boldi, B. A. Ribeiro-Neto, and B. B. Cambazoglu, Eds. ACM, 2009, pp. 5-14.
[21] J. Liu, Z. Dou, X. Wang, S. Lu, and J. Wen, "DVGAN: A minimax game for search result diversification combining explicit and implicit features," in Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, J. X. Huang, Y. Chang, X. Cheng, J. Kamps, V. Murdock, J. Wen, and Y. Liu, Eds. ACM, 2020, pp. 479-488.
| 2308.07107#131 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 133 | J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. van den Driessche, J. Lespiau, B. Damoc, A. Clark, D. de Las Casas, A. Guy, J. Menick, R. Ring, T. Hennigan, S. Huang, L. Maggiore, C. Jones, A. Cassirer, A. Brock, M. Paganini, G. Irving, O. Vinyals, S. Osindero, K. Simonyan, J. W. Rae, E. Elsen, and L. Sifre, "Improving language models by retrieving from trillions of tokens," in International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 2022, pp. 2206-2240. | 2308.07107#133 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 133 | nu bro xc J metal mms webidl tea redcode shen pov-ray-sdl x10 brainfuck ninja golo webassembly self labview octave pogoscript d http ecl chuck gosu parrot opal objective-j it gams prolog clarion mask brightscript scaml matlab idl ags-script lookml apacheconf oxygene txl gf renderscript mtml unified-parallel-c dogescript gentoo-eclass 2.38 2.34 2.02 1.81 1.72 1.54 1.51 1.47 1.27 1.2 1.14 1.01 0.96 0.95 0.9 0.86 0.82 0.81 0.8 0.8 0.8 0.74 0.66 0.58 0.52 0.52 0.47 0.46 0.41 0.38 0.28 0.27 0.25 0.24 0.18 0.16 0.15 0.12 0.12 0.11 0.1 0.1 0.09 0.06 0.05 0.05 0.04 0.04 170 333 88 142 151 91 96 29 149 71 104 33 167 187 115 83 15 61 12 74 20 140 99 60 17 69 37 48 18 35 13 37 28 31 29 31 10 59 39 54 13 10 0.0 | 2308.07124#133 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 134 | [24] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, X. Jiang, K. Cobbe, T. Eloundou, G. Krueger, K. Button, M. Knight, B. Chess, and J. Schulman, "WebGPT: Browser-assisted question-answering with human feedback," CoRR, vol. abs/2112.09332, 2021.
[25] G. Salton and M. McGill, Introduction to Modern Information Retrieval. McGraw-Hill Book Company, 1984. [26] G. Salton, A. Wong, and C. Yang, "A vector space model for automatic indexing," Commun. ACM,
vol. 18, no. 11, pp. 613-620, 1975. | 2308.07107#134 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 135 | vol. 18, no. 11, pp. 613-620, 1975.
[27] F. Song and W. B. Croft, "A general language model for information retrieval," in Proceedings of the 1999 ACM CIKM International Conference on Information and Knowledge Management, Kansas City, Missouri, USA, November 2-6, 1999. ACM, 1999, pp. 316-321. [28] J. Martineau and T. Finin, "Delta TFIDF: an improved feature space for sentiment analysis," in Proceedings of the Third International Conference on Weblogs and Social Media, ICWSM 2009, San Jose, California, USA, May 17-20, 2009, E. Adar, M. Hurst, T. Finin, N. S. Glance, N. Nicolov, and B. L. Tseng, Eds. The AAAI Press, 2009.
| 2308.07107#135 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 135 | nu bro xc j metal mms webidl tea redcode shen pov-ray-sdl x10 brainfuck ninja golo webassembly self labview octave pogoscript d http ecl chuck gosu parrot opal objective-j kit gams prolog clarion mask brightscript scaml matlab idl ags-script lookml apacheconf oxygene txl gf renderscript mtml unified-parallel-c dogescript gentoo-eclass zimpl irc-log fantom numpy cirru xpages nginx objdump python-traceback realbasic befunge | 2308.07124#135 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 136 | [29] S. E. Robertson, S. Walker, S. Jones, M. Hancock-Beaulieu, and M. Gatford, "Okapi at TREC-3," in Proceedings of The Third Text REtrieval Conference, TREC 1994, Gaithersburg, Maryland, USA, November 2-4, 1994, ser. NIST Special Publication, D. K. Harman, Ed., vol. 500-225. National Institute of Standards and Technology (NIST), 1994, pp. 109-126.
[30] J. Guo, Y. Fan, Q. Ai, and W. B. Croft, "A deep relevance matching model for ad-hoc retrieval," in Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016, S. Mukhopadhyay, C. Zhai, E. Bertino, F. Crestani, J. Mostafa, J. Tang, L. Si, X. Zhou, Y. Chang, Y. Li, and P. Sondhi, Eds. ACM, 2016, pp. 55-64.
| 2308.07107#136 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 136 | 2.38 2.34 2.02 1.81 1.72 1.54 1.51 1.47 1.27 1.2 1.14 1.01 0.96 0.95 0.9 0.86 0.82 0.81 0.8 0.8 0.8 0.74 0.66 0.58 0.52 0.52 0.47 0.46 0.41 0.38 0.28 0.27 0.25 0.24 0.18 0.16 0.15 0.12 0.12 0.11 0.1 0.1 0.09 0.06 0.05 0.05 0.04 0.04 0.04 0.04 0.03 0.03 0.02 0.02 0.02 0.02 0.02 0.01 0.01
170 333 88 142 151 91 96 29 149 71 104 33 167 187 115 83 15 61 12 74 20 140 48 99 60 17 69 37 48 18 35 13 37 28 31 29 1 31 10 59 9 3 39 54 13 6 10 6 7 9 11 1 4 7 6 1 10 1 2 | 2308.07124#136 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
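The OctoPack summary above reports a 46.2% pass@1 score on the HumanEval benchmark. pass@k is conventionally computed with the unbiased estimator popularized by the HumanEval paper; a minimal sketch (the function name is mine):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), given n samples of which c pass."""
    if n - c < k:  # every size-k draw must contain at least one passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=10, c=4, k=1))  # 0.4: with k=1 this reduces to c/n
```

Averaging this estimator over all benchmark problems yields the reported pass@1 percentage.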
2308.07107 | 137 | [30]
[31] L. Xiong, C. Xiong, Y. Li, K. Tang, J. Liu, P. N. Bennett, J. Ahmed, and A. Overwijk, "Approximate nearest neighbor negative contrastive learning for dense text retrieval," in 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
[32] J. Lin, R. F. Nogueira, and A. Yates, Pretrained Transformers for Text Ranking: BERT and Beyond, ser. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers, 2021.
[33] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," 2019.
[34] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger,
| 2308.07107#137 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 137 | [Table 4 residue: flattened numeric columns (mostly 0.0 and small fractional values) from the COMMITPACK/COMMITPACKFT language-distribution table; the matching language names are not recoverable from this chunk] | 2308.07124#137 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 138 |
T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, "Language models are few-shot learners," in Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., 2020.
[35]
[36] | 2308.07107#138 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 138 | [Table 4 residue: flattened numeric columns, a page number, and a repeated page header; the only recoverable row labels are bison, m, and omgrofl, each listed at 0.01 MB with a single COMMITPACK sample and no COMMITPACKFT samples] | 2308.07124#138 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 139 | [35] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample, "Llama: Open and efficient foundation language models," CoRR, vol. abs/2302.13971, 2023.
[36] J. Zhang, R. Xie, Y. Hou, W. X. Zhao, L. Lin, and J. Wen, "Recommendation as instruction following: A large language model empowered recommendation approach," CoRR, vol. abs/2305.07001, 2023.
[37] Y. Hou, J. Zhang, Z. Lin, H. Lu, R. Xie, J. J. McAuley, and W. X. Zhao, "Large language models are zero-shot rankers for recommender systems," CoRR, vol. abs/2305.08845, 2023. | 2308.07107#139 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 139 | Table 4: Programming language distribution of COMMITPACK and COMMITPACKFT. Shortcuts: MB=Megabytes, owl=web-ontology-language, pir=parrot-internal-representation, dcl=digital-command-language, mms=module-management-system, gf=grammatical-framework
# D DATASET CREATION
COMMITPACK We use the GitHub archive available on GCP, which contains metadata from GitHub commits up to 2016. It contains around 3TB of GitHub activity data for more than 2.8 million GitHub repositories, including more than 145 million unique commits, over 2 billion different file paths, and the contents of the latest revision for 163 million files. We apply the filters in Table 5 to this dataset. The resulting dataset, containing only metadata, is uploaded at https://hf.co/datasets/bigcode/commitpackmeta. As the activity dataset only contains commit ids without the actual code changes, we scrape the code from GitHub. We use the metadata and the GitHub API to scrape the changed file before and after the respective commit. Some repositories referenced in the activity data are no longer accessible, so we discard them. This results in COMMITPACK with approximately 4 terabytes, uploaded at https://hf.co/datasets/bigcode/commitpack. | 2308.07124#139 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
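The dataset-creation passage above applies the Table 5 filters to commit metadata before scraping. A minimal sketch of the commit-message filters (the 5–10,000 character length bound and the lowercased noise blocklist); the blocklist below is only a small subset of the paper's list and the function name is mine:

```python
# Subset of the lowercased noise messages listed in Table 5 of the paper.
NOISE_MESSAGES = {
    "add files via upload", "commit", "dummy", "first commit",
    "initial commit", "no message", "readme", "update", "updates",
}

def keep_commit_message(message: str, min_len: int = 5, max_len: int = 10_000) -> bool:
    """Apply the length filter (5..10,000 chars) and the noise blocklist."""
    if not (min_len <= len(message) <= max_len):
        return False
    return message.lower() not in NOISE_MESSAGES

print([keep_commit_message(m) for m in
       ["Fix off-by-one in the tokenizer", "Update", "hi"]])  # [True, False, False]
```

In the real pipeline these predicates run over the GCP commit-metadata table before any code is fetched, so noisy records never reach the scraper.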
2308.07124 | 140 | Table 5 filters (Description: Details) — License, Length, Noise, Single file, Opt-out. License: Only keep samples licensed as MIT, Artistic-2.0, ISC, CC0-1.0, EPL-1.0, MPL-2.0, Apache-2.0, BSD-3-Clause, AGPL-3.0, LGPL-2.1, BSD-2-Clause or without license. Length: Only keep code where the commit message has at least 5 and at most 10,000 characters. Noise: Remove code where the lowercased commit message is any of "add files via upload", "can't you see i'm updating the time?", "commit", "create readme.md", "dummy", "first commit", "heartbeat update", "initial commit", "mirroring from micro.blog.", "no message", "pi push", "readme", "update", "updates", "update _config.yaml", "update index.html", "update readme.md", "update readme", "updated readme", "update log", "update | 2308.07124#140 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
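The filter list above also specifies a license allow-list. A sketch of that filter; treating a missing license as `None` and the lowercase identifier spellings are my assumptions:

```python
from typing import Optional

# License allow-list from Table 5 (lowercased identifiers).
ALLOWED_LICENSES = {
    "mit", "artistic-2.0", "isc", "cc0-1.0", "epl-1.0", "mpl-2.0",
    "apache-2.0", "bsd-3-clause", "agpl-3.0", "lgpl-2.1", "bsd-2-clause",
}

def keep_license(license_id: Optional[str]) -> bool:
    """Keep samples under an allowed license, or samples with no license at all."""
    return license_id is None or license_id.lower() in ALLOWED_LICENSES

print([keep_license(x) for x in ("MIT", "GPL-3.0", None)])  # [True, False, True]
```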
[40] S. Wu, O. Irsoy, S. Lu, V. Dabravolski, M. Dredze, S. Gehrmann, P. Kambadur, D. S. Rosenberg, and G. Mann, "Bloomberggpt: A large language model for finance," CoRR, vol. abs/2303.17564, 2023.
[41] J. Li, Y. Liu, W. Fan, X. Wei, H. Liu, J. Tang, and Q. Li, "Empowering molecule discovery for molecule-caption translation with large language models: A chatgpt perspective," CoRR, vol. abs/2306.06615, 2023.
[42] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus, "Emergent abilities of large language models," Trans. Mach. Learn. Res., vol. 2022, 2022. | 2308.07107#141 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 142 | [43] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. F. Christiano, J. Leike, and R. Lowe, "Training language models to follow instructions with human feedback," in NeurIPS, 2022. [44] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, "Finetuned language models are zero-shot learners," in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [45] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, and D. Zhou, "Chain-of-thought prompting elicits reasoning in large language models," in NeurIPS, 2022. | 2308.07107#142 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 142 | # Table 5: COMMITPACK filters.
COMMITPACKFT Prior work has shown the importance of careful data filtering to maintain quality (Yin et al., 2018; Dhole et al., 2021; Laurençon et al., 2022; Longpre et al., 2023b). To create a smaller version focused on commits that resemble high-quality instructions, we further filter COMMITPACK to create COMMITPACKFT using the steps outlined in Table 6. We also checked for any contamination with HumanEval (Chen et al., 2021) but did not find any solution or docstring present in COMMITPACKFT. This is likely because our commit data only goes up to 2016, which is several years prior to the release of HumanEval. Our filters reduce the dataset by a factor of around 1000, resulting in close to 2 gigabytes uploaded at https://hf.co/datasets/bigcode/commitpackft. To gain a deeper understanding of the rich content within COMMITPACKFT, we analyze commits on its Python subset (56K samples). We first collect the most prevalent commit domains by prompting GPT-4 with: "I'd like to know the main types of commits on Github and aim to cover as comprehensively as possible." Subsequently, we use GPT-4 to classify each sample using the prompt in Figure 5. The task distribution is visualized in Figure 2. | 2308.07124#142 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
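The contamination check described above for COMMITPACKFT (scanning for any HumanEval solution or docstring appearing verbatim) can be sketched as a simple substring scan. This is a hedged illustration, not the authors' code; the field names (`message`, `new_contents`) and the toy data are assumptions.

```python
# Minimal sketch (not the authors' implementation) of checking an instruction
# dataset for verbatim contamination against a benchmark, as described for
# CommitPackFT vs. HumanEval. Field names are hypothetical.

def is_contaminated(sample: dict, benchmark_snippets) -> bool:
    """True if any benchmark solution/docstring appears verbatim in the sample."""
    text = sample.get("message", "") + "\n" + sample.get("new_contents", "")
    return any(snippet in text for snippet in benchmark_snippets)

# Toy data: the second sample reuses a HumanEval function signature verbatim.
samples = [
    {"message": "Fix off-by-one in pagination", "new_contents": "page = max(0, page - 1)"},
    {"message": "Add helper", "new_contents": "def has_close_elements(numbers, threshold):"},
]
snippets = ["def has_close_elements(numbers, threshold):"]
clean = [s for s in samples if not is_contaminated(s, snippets)]
```

A real check over the full dataset would stream samples and may normalize whitespace before matching; exact-substring matching is the simplest conservative variant.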
2308.07107 | 144 | [48] Y. Cao, S. Li, Y. Liu, Z. Yan, Y. Dai, P. S. Yu, and L. Sun, "A comprehensive survey of ai-generated content (AIGC): A history of generative AI from GAN to chatgpt," CoRR, vol. abs/2303.04226, 2023. [49] J. Li, T. Tang, W. X. Zhao, and J. Wen, "Pretrained language model for text generation: A survey," in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, Z. Zhou, Ed. ijcai.org, 2021, pp. 4492–4499. | 2308.07107#144 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 144 | Remove samples where the before code has more than 50,000 characters Remove samples where the after code has 0 characters Remove samples where the before and after code are the same (e.g. file name changes) Remove samples that contain a hashtag (to avoid references to issues) Remove samples where the filename of the code after has an atypical extension for the programming language (e.g. only keep ".py" for Python) Remove samples where the filename is contained in the commit message (as we do not use the filename in finetuning) Only keep samples where the commit message has more than 10 and less than 1000 characters Only keep samples where the commit message can be split into more than 4 and less than 1000 space-separated words Remove any appearances of "[skip ci]", "[ci skip]", sequences at the beginning or end that are in brackets, sequences at the beginning that end with ":" and strip whitespace at the beginning or end Only keep samples where the message starts with an uppercase letter Only keep samples where the concatenation of the code before, a special token and the code after has at least 50 tokens and at most 768 tokens according to the StarCoder tokenizer Only keep samples where the | 2308.07124#144 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
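The commit-message thresholds quoted in the filter list above (more than 10 and fewer than 1000 characters, more than 4 and fewer than 1000 space-separated words, an uppercase first letter) can be illustrated with a short sketch. This is a hedged illustration of the stated thresholds, not the authors' actual pipeline code.

```python
# Illustrative sketch of three of the COMMITPACKFT commit-message filters:
# 10 < character length < 1000, 4 < space-separated words < 1000,
# and the message must start with an uppercase letter.

def keep_message(msg: str) -> bool:
    if not (10 < len(msg) < 1000):
        return False
    if not (4 < len(msg.split(" ")) < 1000):
        return False
    return msg[0].isupper()

assert keep_message("Add a retry loop around the flaky network call")  # passes all three checks
assert not keep_message("fix typo")  # too short and lowercase
```

Splitting on a single space mirrors the "space-separated words" wording; a different tokenization would change the word count for messages with consecutive spaces.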
2308.07107 | 145 | [49]
[50] Q. Dong, L. Li, D. Dai, C. Zheng, Z. Wu, B. Chang, X. Sun, J. Xu, L. Li, and Z. Sui, "A survey for in-context learning," CoRR, vol. abs/2301.00234, 2023. [51] J. Huang and K. C. Chang, "Towards reasoning in large language models: A survey," in Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, A. Rogers, J. L. Boyd-Graber, and N. Okazaki, Eds. Association for Computational Linguistics, 2023, pp. 1049–1065.
[52] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, Y. Du, C. Yang, Y. Chen, Z. Chen, J. Jiang, R. Ren, Y. Li, X. Tang, Z. Liu, P. Liu, J. Nie, and J. Wen, "A survey of large language models," CoRR, vol. abs/2303.18223, 2023. | 2308.07107#145 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 145 | the code before, a special token and the code after has at least 50 tokens and at most 768 tokens according to the StarCoder tokenizer Only keep samples where the lowercased commit message starts with any of the words in Table 7 Remove samples where the lowercased commit message contains any of "auto commit", "update contributing", "<?xml", "merge branch", "merge pull request", "signed-off-by", "fix that bug where things didn't work but now they should", "put the thingie in the thingie", "add a beter commit message", "code review", "//codereview", "work in progress", "wip", "https://", "http://", "| leetcode", "cdpcp", " i ", "i've", "i'm" or both "thanks to" and "for" Remove samples where the lowercased commit message has a match for the regular expressions (?:v)?\d+\.\d+\.\d+(?=$|\S), any of ^[a-f0-9]+(?:-[a-f0-9]+)*$, ([a-f0-9]{40}), | 2308.07124#145 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
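The regular expressions quoted in the chunk above drop lowercased commit messages that look like version strings, bare hex identifiers, or full-length git hashes. A hedged sketch of how such a drop rule could be applied (an illustration using the quoted patterns, not the authors' exact pipeline):

```python
import re

# Hedged sketch of the regex-based removals quoted above: drop lowercased
# commit messages containing version strings, messages that are only hex
# chunks, or 40-character git SHA-1 hashes. Illustration only.

_PATTERNS = [
    re.compile(r"(?:v)?\d+\.\d+\.\d+(?=$|\S)"),  # version strings such as v1.2.3
    re.compile(r"^[a-f0-9]+(?:-[a-f0-9]+)*$"),   # message is nothing but hex chunks
    re.compile(r"([a-f0-9]{40})"),               # full-length SHA-1 commit hashes
]

def drop_by_regex(message: str) -> bool:
    msg = message.lower()
    return any(p.search(msg) for p in _PATTERNS)
```

Note the lookahead `(?=$|\S)` only matches a version number followed by end-of-string or a non-space character, so "1.2.3 now" would survive while a trailing "v1.2.3" would not.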
2308.07107 | 146 | [53] Q. Ai, T. Bai, Z. Cao, Y. Chang, J. Chen, Z. Chen, Z. Cheng, S. Dong, Z. Dou, F. Feng, S. Gao, J. Guo, X. He, Y. Lan, C. Li, Y. Liu, Z. Lyu, W. Ma, J. Ma, Z. Ren, P. Ren, Z. Wang, M. Wang, J. Wen, L. Wu, X. Xin, J. Xu, D. Yin, P. Zhang, F. Zhang, W. Zhang, M. Zhang, and X. Zhu, "Information retrieval meets large language models: A strategic report from Chinese IR community," CoRR, vol. abs/2307.09751, 2023. [54] X. Liu and W. B. Croft, "Statistical language modeling for information retrieval," Annu. Rev. Inf. Sci. Technol., vol. 39, no. 1, pp. 1–31, 2005.
[55] B. Mitra and N. Craswell, "Neural models for information retrieval," CoRR, vol. abs/1705.01509, 2017. | 2308.07107#146 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 147 | [56] W. X. Zhao, J. Liu, R. Ren, and J. Wen, "Dense text retrieval based on pretrained language models: A survey," CoRR, vol. abs/2211.14876, 2022.
[57] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," J. Mach. Learn. Res., vol. 21, pp. 140:1–140:67, 2020.
[58] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, "Deep contextualized word representations," in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana,
[59] | 2308.07107#147 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07107 | 148 | USA, June 1-6, 2018, Volume 1 (Long Papers), M. A. Walker, H. Ji, and A. Stent, Eds. Association for Computational Linguistics, 2018, pp. 2227–2237. [59] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), J. Burstein, C. Doran, and T. Solorio, Eds. Association for Computational Linguistics, 2019, pp. 4171–4186.
2308.07124 | 148 | "abort", "accelerate", "access", "accumulate", "add", "address", "adjust", "advance", "align", "allot", "allow", "amplify", "annotate", "append", "apply", "archive", "arrange", "attach", "augment", "automate", "backup", "boost", "break", "bring", "brush up", "build", "bump", "call", "change", "check", "choose", "clarify", "clean", "clear", "clone", "comment", "complete", "compress", "concatenate", "configure", "connect", "consolidate", "convert", "copy", "correct", | 2308.07124#148 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 149 | [60] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 5998–6008. | 2308.07107#149 | Large Language Models for Information Retrieval: A Survey
2308.07124 | 149 | "configure", "connect", "consolidate", "convert", "copy", "correct", "cover", "create", "customize", "cut", "deal with", "debug", "decipher", "declare", "decommission", "decomplexify", "decompress", "decrease", "decrypt", "define", "delete", "deploy", "designate", "destroy", "detach", "determine", "develop", "diminish", "disable", "discard", "disentangle", "dismantle", "divide", "document", "downgrade", "drop", "duplicate", "edit", "embed", "emphasize", "enable", "encrypt", "enforce", | 2308.07124#149 | OctoPack: Instruction Tuning Code Large Language Models
2308.07107 | 150 | [61] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, D. Jurafsky, J. Chai, N. Schluter, and J. R. Tetreault, Eds. Association for Computational Linguistics, 2020, pp. 7871–7880. [62] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei, "Scaling laws for neural language models," CoRR, vol. abs/2001.08361, 2020. | 2308.07107#150 | Large Language Models for Information Retrieval: A Survey
2308.07124 | 150 | "edit", "embed", "emphasize", "enable", "encrypt", "enforce", "enhance", "enlarge", "enumerate", "eradicate", "escalate", "establish", "exclude", "exit", "expand", "expedite", "expire", "extend", "facilitate", "fix", "format", "gather", "generalize", "halt", "handle", "hasten", "hide", "implement", "improve", "include", "increase", "increment", "indent", "index", "inflate", "initialize", "insert", "install", "integrate", "interpolate", "interrupt", "introduce", "isolate", | 2308.07124#150 | OctoPack: Instruction Tuning Code Large Language Models
2308.07107 | 151 | [63] A. Clark, D. de Las Casas, A. Guy, A. Mensch, M. Paganini, J. Hoffmann, B. Damoc, B. A. Hechtman, T. Cai, S. Borgeaud, G. van den Driessche, E. Rutherford, T. Hennigan, M. J. Johnson, A. Cassirer, C. Jones, E. Buchatskaya, D. Budden, L. Sifre, S. Osindero, O. Vinyals, M. Ranzato, J. W. Rae, E. Elsen, K. Kavukcuoglu, and K. Simonyan, "Unified scaling laws for routed language models," in International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 2022, pp. 4057–4086. | 2308.07107#151 | Large Language Models for Information Retrieval: A Survey
2308.07124 | 151 | "integrate", "interpolate", "interrupt", "introduce", "isolate", "join", "kill", "leverage", "load", "magnify", "maintain", "make", "manage", "mark", "mask", "mend", "merge", "migrate", "modify", "monitor", "move", "multiply", "normalize", "optimize", "orchestrate", "order", "package", "paraphrase", "paste", "patch", "plug", "prepare", "prepend", "print", "provision", "purge", "put", "quit", "raise", "read", "reannotate", "rearrange", "rebase", "reboot", | 2308.07124#151 | OctoPack: Instruction Tuning Code Large Language Models
2308.07107 | 152 | [64] L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y. Wang, J. Gao, M. Zhou, and H. Hon, "Unified language model pre-training for natural language understanding and generation," in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. B. Fox, and R. Garnett, Eds., 2019, pp. 13 042–13 054.
[65] L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, and C. Raffel, "mt5: A massively multilingual pre-trained text-to-text transformer," in Proceedings of the 2021 Confer- | 2308.07107#152 | Large Language Models for Information Retrieval: A Survey
2308.07124 | 152 | "read", "reannotate", "rearrange", "rebase", "reboot", "rebuild", "recomment", "recompile", "reconfigure", "reconnect", "rectify", "redact", "redefine", "reduce", "refactor", "reformat", "refresh", "reimplement", "reinforce", "relocate", "remove", "rename", "reorder", "reorganize", "repackage", "repair", "rephrase", "replace", "reposition", "reschedule", "reset", "reshape", "resolve", "restructure", "return", "revert", "revise", "revoke", "reword", "rework", "rewrite", "rollback", | 2308.07124#152 | OctoPack: Instruction Tuning Code Large Language Models
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 153 | ence of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tür, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty, and Y. Zhou, Eds. Association for Computational Linguistics, 2021, pp. 483–498. [66] V. Sanh, A. Webson, C. Raffel, S. H. Bach, L. Sutawika, Z. Alyafeai, A. Chaffin, A. Stiegler, A. Raja, M. Dey, M. S. Bari, C. Xu, U. Thakker, S. S. Sharma, E. Szczechla, T. Kim, G. Chhablani, N. V. Nayak, D. Datta, J. Chang, M. T. Jiang, H. Wang, M. Manica, S. Shen, Z. X. Yong, H. Pandey, R. Bawden, T. | 2308.07107#153 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 153 | "revoke", "reword", "rework", "rewrite", "rollback", "save", "scale", "scrub", "secure", "select", "send", "set", "settle", "simplify", "solve", "sort", "speed up", "split", "stabilize", "standardize", "stipulate", "stop", "store", "streamline", "strengthen", "structure", "substitute", "subtract", "support", "swap", "switch", "synchronize", "tackle", "tag", "terminate", "test", "throw", "tidy", "transform", "transpose", "trim", "troubleshoot", "truncate", | 2308.07124#153 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 154 | Jiang, H. Wang, M. Manica, S. Shen, Z. X. Yong, H. Pandey, R. Bawden, T. Wang, T. Neeraj, J. Rozen, A. Sharma, A. Santilli, T. Févry, J. A. Fries, R. Teehan, T. L. Scao, S. Biderman, L. Gao, T. Wolf, and A. M. Rush, "Multitask prompted training enables zero-shot task generalization," in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. | 2308.07107#154 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 154 | "transform", "transpose", "trim", "troubleshoot", "truncate", "tweak", "unblock", "uncover", "undo", "unify", "uninstall", "unplug", "unpublish", "unravel", "unstage", "unsync", "untangle", "unwind", "update", "upgrade", "use", "validate", "verify", "watch", "watermark", "whitelist", "withdraw", "work", "write" | 2308.07124#154 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 155 | [67] H. Bao, L. Dong, F. Wei, W. Wang, N. Yang, X. Liu, Y. Wang, J. Gao, S. Piao, M. Zhou, and H. Hon, "Unilmv2: Pseudo-masked language models for unified language model pre-training," in Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, ser. Proceedings of Machine Learning Research, vol. 119. PMLR, 2020, pp. 642–652.
[68] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia, W. L. Tam, Z. Ma, Y. Xue, J. Zhai, W. Chen, Z. Liu, P. Zhang, Y. Dong, and J. Tang, "GLM-130B: an open bilingual pre-trained model," in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. | 2308.07107#155 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 155 | Table 7: Commit message starting words allowed in COMMITPACKFT.
Please categorize the following commit message, which may fall into more than one category.

### Category
Bug fixes, New features, Refactoring/code cleanup, Documentation, Testing, User interface, Dependencies, Configuration, Build system/tooling, Performance improvements, Formatting/Linting, Security, Technical debt repayment, Release management, Accessibility, Deprecation, Logging/Instrumentation, Internationalization

### Commit Message
Add the blacklist checking to the bulk

### Classification
Bug fixes, New features

### Commit Message
{COMMIT_MESSAGE}

### Classification

Figure 5: GPT-4 1-shot prompt for classifying commits in COMMITPACKFT.
xP3x We use a subset of xP3x (Muennighoff et al., 2022b) focusing on code datasets consisting of APPS (Hendrycks et al., 2021), CodeContests (Li et al., 2022b), Jupyter Code Pairs,6 MBPP (Austin et al., 2021), XLCoST (Zhu et al., 2022), Code Complex (Jeon et al., 2022), Docstring Corpus (Barone & Sennrich, 2017), Great Code (Hellendoorn et al., 2019) and State Changes.7 | 2308.07124#155 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 156 | [69] W. Fedus, B. Zoph, and N. Shazeer, "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity," J. Mach. Learn. Res., vol. 23, pp. 120:1–120:39, 2022.
[70] Z. Yang, Z. Dai, Y. Yang, J. G. Carbonell, R. Salakhutdinov, and Q. V. Le, "Xlnet: Generalized autoregressive pretraining for language understanding," in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. B. Fox, and R. Garnett, Eds., 2019, pp. 5754–5764. | 2308.07107#156 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 156 | OASST We reuse a filtered variant of OASST (Köpf et al., 2023) from prior work (Dettmers et al., 2023) and apply additional filters to remove responses that refuse to comply with the user request. To compute the programming languages and code fraction for OASST depicted in Table 1, we count all responses containing e.g. ```python or ```py for the Python programming language. There are code samples that are not enclosed in backticks or do not specify the language, thus we are likely underestimating the actual fraction of code data for OASST in Table 1.
# E COMPARING DATA BEFORE AND AFTER FILTERING
In Table 8 we compare word statistics prior to and after filtering COMMITPACK to create COMMITPACKFT. The mean commit subject and message length increases, suggesting that messages are more informative in COMMITPACKFT. The code lengths decrease significantly as we limit the number of allowed tokens in the filters in Table 6. Notably, the percentage of code changed between pre- and post-commit is 77.6/59.1 = 1.31 (a 31% increase) as opposed to 3269.8/3269.9 = 1.007 (a 0.7% increase). Thus, the filtered data carries significantly more signal per token with fewer repetitions of the code prior to the commit. | 2308.07124#156 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 157 | [71] S. Black, S. Biderman, E. Hallahan, Q. Anthony, L. Gao, L. Golding, H. He, C. Leahy, K. McDonell, J. Phang, M. Pieler, U. S. Prashanth, S. Purohit, L. Reynolds, J. Tow, B. Wang, and S. Weinbach, "Gpt-neox-20b: An open-source autoregressive language model," CoRR, vol. abs/2204.06745, 2022.
[72] J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, H. F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, E. Rutherford, T. Hennigan, J. Menick, A. Cassirer, R. Powell, G. van den Driessche, L. A. Hendricks, M. Rauh, P. Huang, A. Glaese, J. Welbl, S. Dathathri, S. Huang, J. Uesato, J. Mellor, I. Higgins, A. Creswell, N. McAleese, A. Wu, E. Elsen, S. M.
| 2308.07107#157 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 157 | Metric                            Before Filter    After Filter    Difference
Subject Length (words)             5.7±0.02         6.9±0.01        +1.28
Message Length (words)             8.7±0.06         9.9±0.05        +1.34
Pre-Commit Code Length (words)     3269.9±298.8     59.1±0.19       -3210.9
Post-Commit Code Length (words)    3269.8±299.5     77.6±0.23       -3214.2
Table 8: The effect of data filters on subject, message, and code lengths. We compare differences in word statistics of COMMITPACK and COMMITPACKFT.
# F COMPARING COMMITPACK AND THE STACK | 2308.07124#157 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
] |
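The before/after numbers in Table 8 are simple per-field word-count averages. A toy recomputation of that kind of statistic (the commit subjects and the length threshold below are invented for illustration, not the real data or the real filter) might look like:

```python
# Toy recomputation of the kind of statistic reported in Table 8: mean word
# counts of commit subjects before and after applying a length filter.
# The subjects and the threshold are made up for illustration.
subjects = ["fix bug", "update readme and docs", "refactor", "add tests for parser"]

def mean_words(lines):
    """Average whitespace-separated word count over a list of strings."""
    return sum(len(s.split()) for s in lines) / len(lines)

before = mean_words(subjects)                        # over all subjects
kept = [s for s in subjects if len(s.split()) >= 3]  # drop very short subjects
after = mean_words(kept)
print(f"before={before:.2f} after={after:.2f}")
```

As in Table 8, filtering out terse entries raises the mean subject length of what remains.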
2308.07107 | 158 | Jayakumar, E. Buchatskaya, D. Budden, E. Sutherland, K. Simonyan, M. Paganini, L. Sifre, L. Martens, X. L. Li, A. Kuncoro, A. Nematzadeh, E. Gribovskaya, D. Donato, A. Lazaridou, A. Mensch, J. Lespiau, M. Tsimpoukelli, N. Grigorev, D. Fritz, T. Sottiaux, M. Pajarskas, T. Pohlen, Z. Gong, D. Toyama, C. de Masson d'Autume, Y. Li, T. Terzi, V. Mikulik, I. Babuschkin, A. Clark, D. de Las Casas, A. Guy, C. Jones, J. Bradbury, M. J. Johnson, B. A. Hechtman, L. Weidinger, I. Gabriel, W. Isaac, E. Lockhart, S. Osindero, L. Rimell, C. Dyer, O. Vinyals, K. Ayoub, J. Stanway, L. Bennett, D. Hassabis, K. Kavukcuoglu, and | 2308.07107#158 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
] |
2308.07124 | 158 | # F COMPARING COMMITPACK AND THE STACK
In Table 9 we provide statistics on repositories and usernames of COMMITPACK and The Stack (Kocetkov et al., 2022). COMMITPACK contains a total of 1,934,255 repositories. Around half (49.3%) of them are also in The Stack. However, The Stack only provides the raw code files of these repositories from some fixed point in time. COMMITPACK contains the changes made to the code files in the form of commits. Thus, the same code file may appear multiple times in COMMITPACK for each change that was made to it. Therefore, The Stack only contains 3 terabytes of data, while COMMITPACK contains close to 4.
Statistic (↓)    COMMITPACK    The Stack 1.2    Shared
Repositories     1,934,255     18,712,378       954,135 (49.3%)
Usernames        825,885       6,434,196        663,050 (80.3%)
Table 9: Overlap in repositories and usernames of COMMITPACK and The Stack.
# G PRETRAINING ON COMMITPACK
Due to the scale of COMMITPACK, it is also adequate as a large-scale pretraining dataset. We have included parts of COMMITPACK during the pretraining of StarCoder (Li et al., 2023b) in the | 2308.07124#158 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
] |
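The "Shared" statistic in Table 9 is a set intersection, with the percentage taken relative to the CommitPack side (e.g. 954,135 / 1,934,255 ≈ 49.3% for repositories). A minimal sketch with invented repository names:

```python
# Toy version of the overlap statistic in Table 9: "Shared" is the set
# intersection of repositories (or usernames), and the percentage is taken
# relative to the CommitPack side. The repository names are invented.
commitpack_repos = {"org/a", "org/b", "org/c", "org/d"}
stack_repos = {"org/b", "org/d", "org/e"}

shared = commitpack_repos & stack_repos
pct_of_commitpack = 100 * len(shared) / len(commitpack_repos)
print(len(shared), f"{pct_of_commitpack:.1f}%")
```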
2308.07124 | 159 | [6] https://hf.co/datasets/codeparrot/github-jupyter-text-code-pairs [7] https://hf.co/datasets/Fraser/python-state-changes
format of <commit_before>code_before<commit_msg>message<commit_after> code_after. We also pretrain a new model, named SANTACODERPACK, with the same architecture as SantaCoder (Allal et al., 2023) on COMMITPACK using this format. We filter COMMITPACK for our six evaluation languages and samples that fit within 8192 tokens, leaving us a total of 35B tokens. Following prior work (Muennighoff et al., 2023), we train on this data repeated close to 4 times for a total of 131B tokens, taking 14 days. Detailed hyperparameters are in Appendix M. | 2308.07124#159 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
] |
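The commit pretraining format described in the chunk above can be sketched as a simple serialization of one commit record into a single sequence (the dict field names below are our own, not those of the released pipeline):

```python
# Sketch (not the authors' released code) of flattening one commit record
# into the pretraining sequence format
# <commit_before>code_before<commit_msg>message<commit_after>code_after.
# The dict keys are hypothetical.
def to_commit_format(record):
    return (
        "<commit_before>" + record["code_before"]
        + "<commit_msg>" + record["message"]
        + "<commit_after>" + record["code_after"]
    )

sample = {
    "code_before": "def add(a, b):\n    return a - b\n",
    "message": "Fix subtraction bug in add",
    "code_after": "def add(a, b):\n    return a + b\n",
}
print(to_commit_format(sample))
```

Each training sequence thus pairs the pre-change code, the human-written commit message, and the post-change code in a fixed order.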
2308.07107 | 160 | [73] N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, Y. Zhou, A. W. Yu, O. Firat, B. Zoph, L. Fedus, M. P. Bosma, Z. Zhou, T. Wang, Y. E. Wang, K. Webster, M. Pellat, K. Robinson, K. S. Meier-Hellstern, T. Duke, L. Dixon, K. Zhang, Q. V. Le, Y. Wu, Z. Chen, and C. Cui, "Glam: Efficient scaling of language models with mixture-of-experts," in International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 2022, pp. 5547–5569. [74] Y. Sun, S. Wang, S. | 2308.07107#160 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
] |
2308.07124 | 160 | In Table 10, we benchmark StarCoder and SANTACODERPACK on HUMANEVALFIX using the above-detailed commit format. We find that the commit format leads to very strong performance for StarCoder, often surpassing the instruction tuned OCTOCODER from Table 2. However, this pretraining format is not suitable for HUMANEVALEXPLAIN, limiting its universality. For SANTACODERPACK, we find performance comparable to SantaCoder, including checkpoints at 131B and 236B tokens. SANTACODERPACK performs slightly worse on Python than SantaCoder. We hypothesize that this discrepancy is due to a multilingual tax, as SANTACODERPACK needs to accommodate three additional coding languages (Go, C++ and Rust). SantaCoder thus has more capacity allocated to Python, JavaScript, and Java.
SANTACODERPACK may also be bottlenecked by its small model size of 1.1B parameters. More research into what exactly happens during pretraining (Xia et al., 2022; Biderman et al., 2023a) and how to unify pretraining and instruction tuning is needed. Prior work has also found that including raw code data during pretraining benefits some natural language tasks (Muennighoff et al., 2023). Future work may consider the effects of including code commit data on natural language tasks. | 2308.07124#160 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
] |
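Table 10 in the chunk above reports zero-shot pass@1. For context, the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021) can be computed as follows; this is the usual definition, not code from OctoPack:

```python
from math import comb

# Unbiased pass@k estimator (Chen et al., 2021): given n samples per task,
# of which c pass the unit tests,
#     pass@k = 1 - C(n - c, k) / C(n, k).
def pass_at_k(n, c, k):
    if n - c < k:  # every size-k subset contains at least one passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=5, k=1))  # for k = 1 this reduces to c / n = 0.25
```

With k = 1 the estimator is simply the fraction of generated samples that pass, averaged over tasks.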
2308.07124 | 161 | Model (↓)                                   Python   JavaScript   Java   Go     C++    Rust   Avg.
SantaCoder (131B tokens), Instruct Format    6.5      4.2          2.9    -      -      -      -
SantaCoder (236B tokens), Instruct Format    7.1      4.2          1.8    -      -      -      -
SANTACODERPACK (131B tokens), Commit Format  3.2      4.9          1.8    3.6    4.2    1.7    3.3
StarCoder, Commit Format                     32.7     33.6         33.0   31.9   31.6   20.2   30.5
Table 10: Zero-shot pass@1 (%) performance on HUMANEVALFIX of pretraining experiments.
# H LINE DIFF FORMAT FOR FIXING CODE
We finetune SantaCoder to experiment with different formatting strategies for fixing bugs, comparing full code generation and code diff generation. When fixing a code bug, usually only a small part of the code needs to change. Only generating the code diff corresponding to the necessary change can make inference significantly more efficient by avoiding repeated characters in the output generation. We finetune SantaCoder on the Python, Java and JavaScript subset of COMMITPACKFT. We exclude other languages as SantaCoder has only been pretrained on these three languages (Allal et al., 2023). | 2308.07124#161 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814
2308.07124 | 162 | Commit Format For full code generation, we reuse the format that we employed for commits in StarCoder pretraining from Appendix G: <commit_before>code_before<commit_msg>message<commit_after>code_after. However, SantaCoder has not seen this format during pretraining and does not have special tokens like StarCoder for the delimiters. Thus, for SantaCoder e.g. <commit_before> is tokenized as ['<', 'commit', '_', 'before', '>'].
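As a sketch, assembling a training sample in this commit format amounts to simple string concatenation. The delimiter strings below are taken from the text above; the helper function itself is ours, not code from the paper:

```python
def format_commit(code_before: str, message: str, code_after: str) -> str:
    # Build the commit-format training string. For SantaCoder the
    # delimiters are ordinary text (tokenized as several subwords),
    # not special tokens as in StarCoder.
    return ("<commit_before>" + code_before
            + "<commit_msg>" + message
            + "<commit_after>" + code_after)

example = format_commit("x = a - b", "Fix sign error", "x = a + b")
print(example)
```

At inference time for bug fixing, everything up to and including `<commit_after>` is given as the prompt and the model generates the full fixed code.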
Unified diff format For code diff generation, a simple solution is using the unified diff format,8 which is a standard way to display changes between code files in a compact and readable format (Lehman et al., 2022; Jung, 2021; Xu et al., 2022b; Monperrus et al., 2021). We depict an example of this format in Figure 6. However, the unified diff format still requires the model to output several unchanged lines below and after the actual modification. Thus, its efficiency gains are limited and there is still unnecessary duplication of the input.
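To illustrate what a unified diff looks like in practice (this uses Python's standard difflib on a toy change, not the paper's tooling):

```python
import difflib

# Toy before/after file contents; COMMITPACKFT pairs real file versions
# from Git commits.
before = ["def add(a, b):", "    return a - b"]
after = ["def add(a, b):", "    return a + b"]

# lineterm="" because these lists have no trailing newlines.
diff = difflib.unified_diff(before, after,
                            fromfile="before.py", tofile="after.py",
                            lineterm="")
print("\n".join(diff))
```

Note that even for this one-line change the diff repeats the unchanged context line, which is the duplication the line diff format below avoids.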
2308.07124 | 163 | Line diff format To address the inefficiencies of the unified diff format, we propose the line diff format for representing code differences. There are two requirements for our format: (1) The diff

8 https://en.wikipedia.org/wiki/Diff#Unified_format

# OctoPack: Instruction Tuning Code Large Language Models

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = elem - elem2
                if distance < threshold:
                    return True
    return False

@@ -4,7 +4,7 @@
     for idx, elem in enumerate(numbers):
         for idx2, elem2 in enumerate(numbers):
             if idx != idx2:
-                distance = elem - elem2
+                distance = abs(elem - elem2)
                 if distance < threshold:
                     return True
2308.07124 | 164 | Figure 6: The first problem from the HUMANEVALFIX Python split and the necessary change to fix the bug in unified diff format. Top: Code with and without the bug from Figure 11. Bottom: Necessary change to fix the bug in unified diff format.

- 7     distance = elem - elem2
+ 7     distance = abs(elem - elem2)

Figure 7: The line diff format for the problem from Figure 6.

can be unambiguously applied to the code before the commit to generate the code after the commit, and (2) the code diff should be as short as possible to maximize efficiency by avoiding the inclusion of unchanged code. In Figure 7, we show how our format addresses these. The line diff format keeps track of each change sequentially line-by-line to ensure the code can be correctly modified. By focusing only on the lines that change, we reduce the number of characters in the diff by 70% compared to the unified diff representation in Figure 6.
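A minimal sketch of how such a line diff could be applied (a hypothetical applier, not the paper's code; like the format itself, it handles line replacements only, not insertions that shift later line numbers):

```python
import re

def apply_line_diff(code: str, diff: str) -> str:
    # Apply replacement-style line diff entries of the form
    # "- N old_line" / "+ N new_line" with 1-based line numbers.
    # Only "+" entries mutate the code; "-" entries name the old line.
    lines = code.split("\n")
    for entry in diff.strip().split("\n"):
        m = re.match(r"([+-]) (\d+) ?(.*)", entry)
        if m and m.group(1) == "+":
            lines[int(m.group(2)) - 1] = m.group(3)
    return "\n".join(lines)

buggy = "def f(a, b):\n    d = a - b\n    return d"
fix = "- 2     d = a - b\n+ 2     d = abs(a - b)"
print(apply_line_diff(buggy, fix))
```

This satisfies requirement (1): given the buggy code and the diff, the fixed code is reconstructed deterministically.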
2308.07124 | 165 | Both the unified diff format and our line diff format require the model to predict line numbers. This is very challenging when training on raw code as models need to count and keep track of line numbers. To simplify line number prediction, we automatically add line numbers to the raw code in the finetuning dataset for the line diff format. This allows the model to simply copy the line number into the output, simplifying the diff generation. However, it diminishes efficiency slightly by adding additional input tokens that the model needs to process.
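The line-numbering preprocessing can be sketched as follows (a hypothetical re-implementation, not the paper's exact code):

```python
def number_lines(code: str) -> str:
    # Prefix each line with its 1-based number so the model can simply
    # copy the number into a line diff instead of counting lines itself.
    return "\n".join(f"{i} {line}"
                     for i, line in enumerate(code.split("\n"), start=1))

print(number_lines("def add(a, b):\n    return a + b"))
```

The trade-off noted above is visible here: every input line grows by a few tokens for the number prefix.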
2308.07124 | 166 | As summarized in Table 11, finetuning SantaCoder using the line diff format significantly improves performance compared to prior finetuning on HUMANEVALFIX across all languages. It also outperforms finetuning using the commit format, which only provides gains on JavaScript and Java compared to no finetuning. However, finetuning on the diff format may converge slower than the commit format as the diff format significantly differs from the raw code seen during pretraining. Figures 8, 9, 10 show line diff generations of our model. A limitation of our current line diff implementation is that it does not handle code insertion well. The inserted lines may change the line numbers of all following lines, which can result in problems when applying the diff. Further, the diff format is not useful for HUMANEVALEXPLAIN and HUMANEVALSYNTHESIZE. Future work could consider training models that can both be instructed to use the line diff format, such as for HUMANEVALFIX, but also explain or synthesize code without producing a diff.

Model                                      Python  JavaScript  Java
SantaCoder                                   7.1      4.2       1.8
SantaCoder + Commit format finetuning        3.8      5.3       9.2
SantaCoder + Line diff format finetuning     9.9      9.7      10.0
2308.07124 | 167 | Table 11: Zero-shot pass@1 (%) performance on HUMANEVALFIX of SantaCoder formatting experiments.
- 3   let depth = 0, max_depth = 0;
+ 3   let depth = 0, max_depth = 1;
- 12  return max_depth;
+ 12  return max_depth - 1;
- 14  return paren_string.split(' ')
- 15      .filter(x => x != '')
- 16      .map(x => parseParenGroup(x));
- 17  }
+ 14  let paren_list = paren_string.split(' ');
+ 15  let nested_parens = paren_list.map(x => parseParenGroup(x));
+ 16  return nested_parens.reduce((prev, curr) => {
+ 17      if (prev == 0) {
+ 18          return curr;
+ 19      } else {
+ 20          return curr - 1;
+ 21      }
+ 22  });
+ 23  }
Figure 8: A line diff generation of our model on a JavaScript HUMANEVALFIX problem.
- 18  if (current_depth < 0) {
+ 18  if (current_depth < 0 && current_string.length() > 0) { | 2308.07124#167 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
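The line diffs in Figure 8 above address numbered source lines with `-`/`+` markers. A minimal sketch of applying single-line replacements of that shape (a hypothetical helper, not part of the OctoPack release):

```python
def apply_line_diff(lines, edits):
    """Apply in-place line edits: `edits` maps 1-indexed line numbers to
    replacement text, or to None to delete the line entirely."""
    out = []
    for number, text in enumerate(lines, start=1):
        if number in edits:
            if edits[number] is not None:
                out.append(edits[number])
        else:
            out.append(text)
    return out

source = ["let depth = 0, max_depth = 0;",
          "depth++;",
          "return max_depth;"]
patched = apply_line_diff(source, {1: "let depth = 0, max_depth = 1;",
                                   3: "return max_depth - 1;"})
print(patched[0])  # let depth = 0, max_depth = 1;
```

This only covers in-place replacement and deletion; the multi-line insertions in Figure 8 would additionally need a splice step.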
2308.07124 | 168 | - 18 if (current_depth < 0) {
+ 18 if (current_depth < 0 && current_string.length() > 0) {
Figure 9: A line diff generation of our model on a Java HUMANEVALFIX problem.
- 2  for i, l1 in enumerate(l):
- 3      for j in range(i, len(l)):
+ 2  for i in range(0, len(l)):
+ 3      for j in range(i+1, len(l)):
Figure 10: A line diff generation of our model on a Python HUMANEVALFIX problem.
# OctoPack: Instruction Tuning Code Large Language Models
I RESULTS ON HUMANEVALFIXDOCS | 2308.07124#168 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
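The Figure 10 fix above changes the inner loop to start at `i+1`, so an element is never paired with itself. A sketch of the surrounding function, under the assumption that it checks for a zero-sum pair (the diff only shows the loop headers):

```python
def has_zero_pair(l):
    # Fixed iteration from the diff: j ranges over i+1..len(l)-1, so the
    # buggy self-pair (i, i), which made any list containing 0 match, is gone.
    for i in range(0, len(l)):
        for j in range(i + 1, len(l)):
            if l[i] + l[j] == 0:
                return True
    return False

print(has_zero_pair([2, 4, -5, 3, 5, 7]))  # True  (-5 + 5 == 0)
print(has_zero_pair([7, 0, 1]))            # False (0 alone is not a pair)
```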
2308.07107 | 169 | [79] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, Y. Wu, B. Neyshabur, G. Gur-Ari, and V. Misra, "Solving quantitative reasoning problems with language models," in NeurIPS, 2022.
[80] OpenAI, "GPT-4 technical report," CoRR, vol.
abs/2303.08774, 2023. J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, and L. Sifre, "Training compute-optimal large language models," CoRR, vol. abs/2203.15556, 2022. | 2308.07107#169 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
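The compute-optimal scaling result cited above (Hoffmann et al.) suggests that parameter count and training tokens should grow together with the compute budget. A rough sketch using the common approximations C ≈ 6·N·D and D ≈ 20·N (both are rules of thumb, not the paper's fitted scaling laws):

```python
import math

def compute_optimal_split(flops):
    """Split a compute budget C (FLOPs) into parameters N and tokens D,
    assuming C ~ 6*N*D and roughly 20 training tokens per parameter."""
    n_params = math.sqrt(flops / 120)  # C = 6 * N * (20 * N) = 120 * N^2
    return n_params, 20 * n_params

n, d = compute_optimal_split(5.76e23)  # roughly the Chinchilla budget
print(f"{n / 1e9:.0f}B parameters, {d / 1e12:.1f}T tokens")
```

With this budget the sketch lands near the Chinchilla configuration of about 70B parameters trained on about 1.4T tokens.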
2308.07124 | 169 |
I RESULTS ON HUMANEVALFIXDOCS
The default version of HUMANEVALFIX does not include docstrings, but only provides the unit tests to the model alongside the buggy function. An alternative is providing docstrings as the source of ground truth for the model to fix the buggy function. Solving from docstrings is generally easier for models than from tests, as models can also solve it via pure code synthesis without looking at the buggy function at all. We provide results of some models on this variant in Table 12. For StarCoder, we distinguish two prompting formats: An instruction to fix bugs like in Figure 3 or the commit format it has seen during pretraining (Appendix G). OCTOCODER performs very strongly on this variant. Diff Codegen 2B (Bradley et al., 2023) performs poorly as its predicted code diffs are often irrelevant to the actual bug, see Figure 38. | 2308.07124#169 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
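The two settings described above differ only in the ground-truth signal shown to the model: unit tests (default HUMANEVALFIX) or the docstring (HUMANEVALFIXDOCS). A sketch of building the two prompt variants (illustrative wording, not the exact benchmark template):

```python
def build_fix_prompt(buggy_code, tests=None, docstring=None):
    """Pair the buggy function with either its unit tests (default
    HumanEvalFix) or its docstring (HumanEvalFixDocs) as ground truth."""
    ground_truth = tests if tests is not None else docstring
    return ("Question: Fix bugs in the function below.\n\n"
            f"{buggy_code}\n\n{ground_truth}\n\nAnswer:")

buggy = "def add(a, b):\n    return a - b"
print(build_fix_prompt(buggy, tests="assert add(1, 2) == 3"))
```

When the docstring variant is used, a model can in principle ignore the buggy body and regenerate the function from the specification alone, which is why this setting is generally easier.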
2308.07107 | 170 | [81]
[82] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, "Lora: Low-rank adaptation of large language models," in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [83] X. L. Li and P. Liang, "Prefix-tuning: Optimizing continuous prompts for generation," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, C. Zong, F. Xia, W. Li, and R. Navigli, Eds. Association for Computational Linguistics, 2021, pp. 4582-4597. | 2308.07107#170 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
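The LoRA method cited in the chunk above ([82]) adds a trainable low-rank product A·B to a frozen weight matrix. A dependency-free numeric sketch with toy shapes (not the reference implementation):

```python
def matmul(X, Y):
    # Plain-list matrix multiply, enough for a toy demonstration.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B):
    """y = x @ W + x @ A @ B: frozen weight W (d_in x d_out) plus a
    low-rank update A (d_in x r) @ B (r x d_out) with r << d_in, d_out."""
    base = matmul(x, W)
    update = matmul(matmul(x, A), B)
    return [[b + u for b, u in zip(rb, ru)] for rb, ru in zip(base, update)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 weight
A = [[0.0], [0.0]]             # rank-1 factors; one side starts at zero,
B = [[0.0, 0.0]]               # so the adapted layer initially equals the base
x = [[3.0, 4.0]]
print(lora_forward(x, W, A, B))  # [[3.0, 4.0]], identical to x @ W at init
```

Only A and B are trained, so the number of new parameters per layer is r·(d_in + d_out) rather than d_in·d_out.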
2308.07124 | 170 | Model                      Python  JavaScript  Java   Go     C++    Rust   Avg.
Non-permissive models
GPT-4                      88.4    80.5        82.9   81.1   82.3   68.9   80.7
Permissive models
Diff Codegen 2B            0.0     0.1         0.0    0.3    0.0    0.2    0.1
StarCoder Commit Format    43.5    29.3        45.7   31.9   28.1   19.4   27.1
StarCoder Instruct Format  41.7    30.7        44.3   34.5   28.7   14.0   26.5
OCTOCODER                  53.8    48.1        54.3   54.9   49.2   32.1   48.7
Table 12: Zero-shot pass@1 (%) performance on HUMANEVALFIXDOCS.
J FULL INSTRUCTION DATA ABLATIONS | 2308.07124#170 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
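The Avg. column in Table 12 above is the mean of the six per-language scores; for example, OCTOCODER's row can be checked directly:

```python
octocoder = {"Python": 53.8, "JavaScript": 48.1, "Java": 54.3,
             "Go": 54.9, "C++": 49.2, "Rust": 32.1}
average = round(sum(octocoder.values()) / len(octocoder), 1)
print(average)  # 48.7, matching the Avg. column for OCTOCODER
```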
2308.07107 | 171 | [84] B. Lester, R. Al-Rfou, and N. Constant, "The power of scale for parameter-efficient prompt tuning," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, M. Moens, X. Huang, L. Specia, and S. W. Yih, Eds. Association for Computational Linguistics, 2021,
pp. 3045-3059.
[85] T. Dettmers, A. Pagnoni, A. Holtzman, and L. Zettlemoyer, "Qlora: Efficient finetuning of quantized llms," CoRR, vol. abs/2305.14314, 2023.
[86] L. Wang, N. Yang, and F. Wei, "Query2doc: Query expansion with large language models," pp. 9414-9423, 2023. | 2308.07107#171 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
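The Query2doc approach cited above ([86]) expands a sparse-retrieval query by concatenating it with an LLM-generated pseudo-document, repeating the original query so its terms keep most of the weight. A minimal sketch (the repetition count is a tunable choice):

```python
def expand_query(query, pseudo_doc, repeats=5):
    """Build an expanded query: the original query `repeats` times,
    followed by the generated pseudo-document."""
    return " ".join([query] * repeats + [pseudo_doc])

expanded = expand_query("what is dense retrieval",
                        "Dense retrieval encodes queries and documents "
                        "into vectors and matches them by similarity.")
print(expanded.count("what is dense retrieval"))  # 5
```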
2308.07124 | 171 | We provide results of some additional mixtures we try to be better than COMMITPACKFT + OASST. We use <commit_before>old code<commit_msg>message<commit_after>new code for COMMITPACKFT and <commit_before><commit_msg>input<commit_after>output for OASST, referred to as the "Formatting" ablation. We hypothesized that aligning the formatting during instruction tuning with the commit format that we used during pretraining (Appendix G) would improve performance. While it seems to improve performance for HUMANEVALFIX compared to our default formatting (see Figure 17), it reduces performance on the other tasks, leading to a worse average score of 35.3 in Table 13. "Target Loss" refers to an ablation where we mask loss for inputs as is commonly done during instruction tuning (Muennighoff et al., 2022b). While this leads to the best performance on HUMANEVALSYNTHESIZE, its average performance is worse compared to COMMITPACKFT + OASST, where the loss is computed over the full sequence. We also perform an ablation where we manually select 1178 high-quality samples (725 from OASST and 89, 61, 86, 72, 70 and 75 from | 2308.07124#171 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 172 | [87] N. A. Jaleel, J. Allan, W. B. Croft, F. Diaz, L. S. Larkey, X. Li, M. D. Smucker, and C. Wade, “UMass at TREC 2004: Novelty and HARD,” in Proceedings of the Thirteenth Text REtrieval Conference, TREC 2004, Gaithersburg, Maryland, USA, November 16-19, 2004, ser. NIST Special Publication, E. M. Voorhees and L. P. Buckland, Eds., vol. 500-261. National Institute of Standards and Technology (NIST), 2004.
[88] D. Metzler and W. B. Croft, “Latent concept expansion using Markov random fields,” in SIGIR 2007: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Amsterdam, The Netherlands, July 23-27, 2007, W. Kraaij, A. P. de Vries, C. L. A. Clarke, N. Fuhr, and N. Kando, Eds. ACM, 2007, pp. 311–318. | 2308.07107#172 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 172 | sequence. We also perform an ablation where we manually select 1178 high-quality samples (725 from OASST and 89, 61, 86, 72, 70 and 75 from COMMITPACKFT for Python, JavaScript, Java, Go, C++ and Rust, respectively). However, this manual selection did not outperform random selection for OCTOCODER. It did perform better for OCTOGEEX, hence we used it for OCTOGEEX. We hypothesize that our models could achieve significantly better performance by further improving the quality of the instruction data. This may necessitate very careful human selection of samples and manual editing of the data to ensure a uniform style in the outputs. We leave such explorations to future work. | 2308.07124#172 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 173 | [89] C. Zhai and J. D. Lafferty, “Model-based feedback in the language modeling approach to information retrieval,” in Proceedings of the 2001 ACM CIKM International Conference on Information and Knowledge Management, Atlanta, Georgia, USA, November 5-10, 2001. ACM, 2001, pp. 403–410.
[90] D. Metzler and W. B. Croft, “A Markov random field model for term dependencies,” in SIGIR 2005: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Salvador, Brazil, August 15-19, 2005, R. A. Baeza-Yates, N. Ziviani, G. Marchionini, A. Moffat, and J. Tait, Eds. ACM, 2005, pp. 472–479. | 2308.07107#173 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 173 |
# OctoPack: Instruction Tuning Code Large Language Models
                                    HUMANEVALPACK Python
Instruction Tuning Dataset (↓)      Fix    Explain  Synthesize  Average
Without instruction tuning          8.7    0.0      33.6        14.1
Self-Instruct (SI)                  23.6   0.6      43.0        22.2
OASST                               23.1   34.5     46.4        34.7
SI + OASST                          24.9   28.7     46.2        33.3
xP3x + OASST                        28.4   28.4     45.0        33.9
COMMITPACKFT + OASST                30.4   35.1     46.2        37.2
COMMITPACKFT + OASST (Formatting)   31.1   28.9     45.8        35.3
COMMITPACKFT + OASST (Target loss)  29.8   31.2     47.8        36.3
COMMITPACKFT + OASST (Manual)       27.2   29.6     45.8        34.2
COMMITPACKFT + xP3x + OASST         30.9   29.5     45.9        35.4
COMMITPACKFT + SI + xP3x + OASST    31.4   33.8     46.0        37.1 | 2308.07124#173 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 174 | [91] X. Wang, C. Macdonald, N. Tonellotto, and I. Ounis, “Pseudo-relevance feedback for multiple representation dense retrieval,” in ICTIR ’21: The 2021 ACM SIGIR International Conference on the Theory of Information Retrieval, Virtual Event, Canada, July 11, 2021, F. Hasibi, Y. Fang, and A. Aizawa, Eds. ACM, 2021, pp. 297–306.
[92] Z. Zheng, K. Hui, B. He, X. Han, L. Sun, and A. Yates, “BERT-QE: contextualized query expansion for document re-ranking,” in Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, ser. Findings of ACL, T. Cohn, Y. He, and Y. Liu, Eds., vol. EMNLP 2020. Association for Computational Linguistics, 2020, pp. 4718–4728. | 2308.07107#174 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 174 | Table 13: Zero-shot pass@1 (%) performance across the Python split of HUMANEVALPACK for StarCoder instruction tuning data ablations.
# K HUMANEVALFIX BUG TYPES
Table 14 contains an overview of bugs that were manually added by one of the authors to HumanEval solutions for the construction of HUMANEVALFIX. Figures 11-16 contain an example of each type from the Python split. The bug type for each problem is the same across all programming languages in HUMANEVALFIX, but for a few samples it affects a different part of the solution due to the code solutions not being perfectly parallel across languages.
Bug type       Subtype           Explanation                                Example     Count
Missing logic  -                 Misses code needed to solve the problem    Figure 11   33
Excess logic   -                 Contains excess code leading to mistakes   Figure 12   31
Wrong logic    Value misuse      An incorrect value is used                 Figure 13   44
               Operator misuse   An incorrect operator is used              Figure 14   25
               Variable misuse   An incorrect variable is used              Figure 15   23
               Function misuse   An incorrect function is used              Figure 16   8
Total                                                                                  164
# Table 14: HUMANEVALFIX bug types.
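The subtypes in Table 14 can be made concrete with a minimal buggy/fixed pair. The snippet below is a hypothetical illustration of the "operator misuse" subtype in the style of Figures 11-16; it is not taken from the benchmark itself:

```python
# Hypothetical pair illustrating the "operator misuse" subtype:
# the buggy variant uses `<` where the task calls for `<=`.

def below_threshold_buggy(numbers, t):
    # Intended: True if every number is at most t.
    # Bug: `<` drops the boundary case n == t.
    return all(n < t for n in numbers)

def below_threshold_fixed(numbers, t):
    # Repaired: `<=` handles the boundary correctly.
    return all(n <= t for n in numbers)

print(below_threshold_buggy([1, 2, 3], 3))  # False (boundary mishandled)
print(below_threshold_fixed([1, 2, 3], 3))  # True
```

In HUMANEVALFIX the model is shown the buggy variant together with failing unit tests and asked to produce the fixed one.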
from typing import List
from typing import List | 2308.07124#174 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 175 | [93] F. Diaz, B. Mitra, and N. Craswell, “Query expansion with locally-trained word embeddings,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics, 2016.
[94] S. Kuzi, A. Shtok, and O. Kurland, “Query expansion using word embeddings,” in Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016, S. Mukhopadhyay, C. Zhai, E. Bertino, F. Crestani, J. Mostafa, J. Tang, L. Si,
X. Zhou, Y. Chang, Y. Li, and P. Sondhi, Eds. ACM, 2016, pp. 1929–1932. | 2308.07107#175 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 175 | # Table 14: HUMANEVALFIX bug types.
from typing import List
from typing import List
def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
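The listing above is cut off at the chunk boundary. For reference, the canonical HumanEval solution completes the function with a final `return False` for the no-close-pair case; a self-contained sketch:

```python
from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """Check if any two numbers in the list are closer to each other than threshold."""
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
    return False  # canonical completion missing from the truncated excerpt

print(has_close_elements([1.0, 2.0, 3.0], 0.5))                 # False
print(has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3))  # True
```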
# def has_close_elements(numbers: List[float | 2308.07124#175 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 176 | X. Zhou, Y. Chang, Y. Li, and P. Sondhi, Eds. ACM, 2016, pp. 1929–1932.
[95] K. Mao, Z. Dou, F. Mo, J. Hou, H. Chen, and H. Qian, “Large language models know your contextual search intent: A prompting framework for conversational search,” pp. 1211–1225, 2023.
[96] I. Mackie, I. Sekulic, S. Chatterjee, J. Dalton, and F. Crestani, “GRM: generative relevance modeling using relevance-aware sample estimation for document retrieval,” CoRR, vol. abs/2306.09938, 2023. | 2308.07107#176 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily
lives. These systems also serve as components of dialogue, question-answering,
and recommender systems. The trajectory of IR has evolved dynamically from its
origins in term-based methods to its integration with advanced neural models.
While the neural models excel at capturing complex contextual signals and
semantic nuances, thereby reshaping the IR landscape, they still face
challenges such as data scarcity, interpretability, and the generation of
contextually plausible yet potentially inaccurate responses. This evolution
requires a combination of both traditional methods (such as term-based sparse
retrieval methods with rapid response) and modern neural architectures (such as
language models with powerful language understanding capacity). Meanwhile, the
emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has
revolutionized natural language processing due to their remarkable language
understanding, generation, generalization, and reasoning abilities.
Consequently, recent research has sought to leverage LLMs to improve IR
systems. Given the rapid evolution of this research trajectory, it is necessary
to consolidate existing methodologies and provide nuanced insights through a
comprehensive overview. In this survey, we delve into the confluence of LLMs
and IR systems, including crucial aspects such as query rewriters, retrievers,
rerankers, and readers. Additionally, we explore promising directions, such as
search agents, within this expanding field. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
{
"id": "2305.03195"
},
{
"id": "2310.09716"
},
{
"id": "2311.01555"
},
{
"id": "2312.02969"
},
{
"id": "2306.17563"
}
] |
2308.07124 | 176 | def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = elem - elem2
                if distance < threshold:
                    return True
    return False
Figure 11: Missing logic bug example. The buggy code (right) misses the "abs" statement.
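The missing `abs` in Figure 11 makes the check trivially true for any list with at least two elements, because one ordering of every pair yields a negative difference, which is always below a positive threshold. A minimal sketch (function names are ours, not from the paper) contrasting the buggy and repaired predicates:

```python
from typing import List


def has_close_elements_buggy(numbers: List[float], threshold: float) -> bool:
    # Missing abs(): a pair visited in the "wrong" order gives a negative
    # distance, which is always below a positive threshold.
    for i, a in enumerate(numbers):
        for j, b in enumerate(numbers):
            if i != j and (a - b) < threshold:
                return True
    return False


def has_close_elements_fixed(numbers: List[float], threshold: float) -> bool:
    # Repaired: compare the absolute distance.
    for i, a in enumerate(numbers):
        for j, b in enumerate(numbers):
            if i != j and abs(a - b) < threshold:
                return True
    return False


# [1.0, 10.0] has no pair closer than 0.5, but the buggy version still
# reports True because 1.0 - 10.0 = -9.0 < 0.5.
print(has_close_elements_buggy([1.0, 10.0], 0.5))  # True (wrong)
print(has_close_elements_fixed([1.0, 10.0], 0.5))  # False (right)
```

This is why the bug is hard to catch with "happy path" tests: any input that genuinely contains a close pair still returns True.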
def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into an integer part
    (largest integer smaller than given number) and decimals (leftover part always smaller
    than 1). Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
    return number % 1.0 | 2308.07124#176 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. We apply instruction tuning
using code, leveraging the natural structure of Git commits, which pair code
changes with human instructions. We compile CommitPack: 4 terabytes of Git
commits across 350 programming languages. We benchmark CommitPack against other
natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B
parameter StarCoder model, and achieve state-of-the-art performance among
models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2%
pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark
to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis)
across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models,
OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among
all permissive models, demonstrating CommitPack's benefits in generalizing to a
wider set of languages and natural coding tasks. Code, models and data are
freely available at https://github.com/bigcode-project/octopack. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
{
"id": "2302.00288"
},
{
"id": "2205.12374"
},
{
"id": "2204.05999"
},
{
"id": "2105.09352"
},
{
"id": "2212.12017"
},
{
"id": "2305.09857"
},
{
"id": "2304.12244"
},
{
"id": "2307.03025"
},
{
"id": "2204.06745"
},
{
"id": "2301.08653"
},
{
"id": "2209.13331"
},
{
"id": "2208.11663"
},
{
"id": "2212.10007"
},
{
"id": "2303.14100"
},
{
"id": "1707.02275"
},
{
"id": "2304.03816"
},
{
"id": "2302.01973"
},
{
"id": "2302.05527"
},
{
"id": "2306.03091"
},
{
"id": "2305.13169"
},
{
"id": "2306.08568"
},
{
"id": "2303.12712"
},
{
"id": "2305.14314"
},
{
"id": "2305.18507"
},
{
"id": "2202.08904"
},
{
"id": "2306.15595"
},
{
"id": "2301.13246"
},
{
"id": "2105.09938"
},
{
"id": "2211.09085"
},
{
"id": "2303.12570"
},
{
"id": "2207.14255"
},
{
"id": "2302.04166"
},
{
"id": "2005.00653"
},
{
"id": "2211.05100"
},
{
"id": "2206.08896"
},
{
"id": "2105.14242"
},
{
"id": "2305.07922"
},
{
"id": "2108.07732"
},
{
"id": "2102.04664"
},
{
"id": "2207.11280"
},
{
"id": "2305.11738"
},
{
"id": "1901.02860"
},
{
"id": "2306.04556"
},
{
"id": "1908.09804"
},
{
"id": "2111.03922"
},
{
"id": "2112.02721"
},
{
"id": "2301.03988"
},
{
"id": "2210.14868"
},
{
"id": "2304.01102"
},
{
"id": "2305.16264"
},
{
"id": "2303.17568"
},
{
"id": "2305.01210"
},
{
"id": "2306.02858"
},
{
"id": "2305.13048"
},
{
"id": "2209.07858"
},
{
"id": "2209.14876"
},
{
"id": "2306.10998"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2207.10397"
},
{
"id": "2307.02053"
},
{
"id": "2305.15717"
},
{
"id": "2302.07867"
},
{
"id": "2210.15424"
},
{
"id": "2204.05862"
},
{
"id": "2304.07590"
},
{
"id": "2307.03172"
},
{
"id": "2307.02469"
},
{
"id": "2308.01861"
},
{
"id": "2108.04631"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2212.09535"
},
{
"id": "2305.03726"
},
{
"id": "2304.14317"
},
{
"id": "2304.05128"
},
{
"id": "2305.02309"
},
{
"id": "2210.07316"
},
{
"id": "2306.11644"
},
{
"id": "2304.07327"
},
{
"id": "2211.15395"
},
{
"id": "2212.09803"
},
{
"id": "2302.05020"
},
{
"id": "2303.03004"
},
{
"id": "2211.01910"
},
{
"id": "2107.03374"
},
{
"id": "2211.01786"
},
{
"id": "2108.12409"
},
{
"id": "2306.04751"
},
{
"id": "2307.09288"
},
{
"id": "2304.08485"
},
{
"id": "2204.07705"
},
{
"id": "2203.13474"
},
{
"id": "2203.08388"
},
{
"id": "2305.06161"
},
{
"id": "2306.00029"
},
{
"id": "2212.10481"
},
{
"id": "2304.11158"
},
{
"id": "2206.08474"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2305.18584"
},
{
"id": "1911.02150"
},
{
"id": "2305.11206"
},
{
"id": "2211.15533"
}
] |
2308.07107 | 177 | [96]
[97] K. Srinivasan, K. Raman, A. Samanta, L. Liao, L. Bertelli, and M. Bendersky, "QUILL: query intent with large language models using retrieval augmentation and multi-stage distillation," in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: EMNLP 2022 - Industry Track, Abu Dhabi, UAE, December 7 - 11, 2022, Y. Li and A. Lazaridou, Eds. Association for Computational Linguistics, 2022, pp. 492–501. J. Feng, C. Tao, X. Geng, T. Shen, C. Xu, G. Long, D. Zhao, and D. Jiang, "Knowledge refinement via interaction between search engines and large language models," CoRR, vol. abs/2305.07402, 2023. I. Mackie, S. Chatterjee, and J. Dalton, "Generative and pseudo-relevant feedback for sparse, dense and learned sparse retrieval," CoRR, vol. abs/2305.07477, 2023. | 2308.07107#177 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily lives. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
] |
2308.07124 | 177 | def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into an integer part
    (largest integer smaller than given number) and decimals (leftover part always smaller
    than 1). Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
    return number % 1.0 + 1.0
Figure 12: Excess logic bug example. The buggy code (right) incorrectly adds 1 to the result.
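The excess `+ 1.0` in Figure 12 shifts every result out of the expected [0, 1) range, so the bug shows up on the docstring's own example. A small sketch (function names are ours) of the divergence:

```python
def truncate_number_buggy(number: float) -> float:
    # Excess logic: the stray "+ 1.0" moves every result out of [0, 1).
    return number % 1.0 + 1.0


def truncate_number_fixed(number: float) -> float:
    # The fractional part of a positive float is just number mod 1.
    return number % 1.0


print(truncate_number_buggy(3.5))  # 1.5 (wrong)
print(truncate_number_fixed(3.5))  # 0.5 (right)
```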
from typing import List, Tuple
def sum_product(numbers: List[int]) -> Tuple[int, int]:
    """ For a given list of integers, return a tuple consisting of a sum and a product
    of all the integers in a list. Empty sum should be equal to 0 and empty product
    should be equal to 1.
    >>> sum_product([])
    (0, 1)
    >>> sum_product([1, 2, 3, 4])
    (10, 24)
    """
    sum_value = 0
    prod_value = 1
    for n in numbers:
        sum_value += n
        prod_value *= n
    return sum_value, prod_value | 2308.07124#177 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
] |
2308.07124 | 178 | def sum_product(numbers: List[int]) -> Tuple[int, int]:
    """ For a given list of integers, return a tuple consisting of a sum and a product
    of all the integers in a list. Empty sum should be equal to 0 and empty product
    should be equal to 1.
    >>> sum_product([])
    (0, 1)
    >>> sum_product([1, 2, 3, 4])
    (10, 24)
    """
    sum_value = 0
    prod_value = 0
    for n in numbers:
        sum_value += n
        prod_value *= n
    return sum_value, prod_value
Figure 13: Value misuse bug example. The buggy code (right) incorrectly initializes the product to 0.
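Figure 13's bug is a classic identity-element mistake: seeding a running product with 0 (the additive identity) instead of 1 (the multiplicative identity) collapses the product to 0 on every input. A sketch (function names are ours) making the collapse visible:

```python
from typing import List, Tuple


def sum_product_buggy(numbers: List[int]) -> Tuple[int, int]:
    # Value misuse: the product accumulator starts at 0, so it stays 0 forever.
    sum_value, prod_value = 0, 0
    for n in numbers:
        sum_value += n
        prod_value *= n
    return sum_value, prod_value


def sum_product_fixed(numbers: List[int]) -> Tuple[int, int]:
    # Correct seed: 1 is the multiplicative identity.
    sum_value, prod_value = 0, 1
    for n in numbers:
        sum_value += n
        prod_value *= n
    return sum_value, prod_value


print(sum_product_buggy([1, 2, 3, 4]))  # (10, 0) -- product collapsed
print(sum_product_fixed([1, 2, 3, 4]))  # (10, 24)
```

Note the fixed seed also satisfies the empty-list contract from the docstring, `sum_product([]) == (0, 1)`.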
from typing import List
def below_zero(operations: List[int]) -> bool:
    """ You're given a list of deposit and withdrawal operations on a bank account that
    starts with zero balance. Your task is to detect if at any point the balance of
    account falls below zero, and at that point function should return True. Otherwise
    it should return False.
    >>> below_zero([1, 2, 3])
    False
    >>> below_zero([1, 2, -4, 5])
    True
    """
    balance = 0
    for op in operations:
        balance += op
        if balance < 0:
            return True | 2308.07124#178 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
] |
2308.07107 | 179 | [102] R. Jagerman, H. Zhuang, Z. Qin, X. Wang, and M. Bendersky, "Query expansion by prompting large language models," CoRR, vol. abs/2305.03653, 2023. [103] Y. Tang, R. Qiu, and X. Li, "Prompt-based effective input reformulation for legal case retrieval," in Databases Theory and Applications - 34th Australasian Database Conference, ADC 2023, Melbourne, VIC, Australia, November 1-3, 2023, Proceedings, ser. Lecture Notes in Computer Science, Z. Bao, R. Borovica-Gajic, R. Qiu, F. M. Choudhury, and Z. Yang, Eds., vol. 14386. Springer, 2023, pp. 87–100.
[104] F. Ye, M. Fang, S. Li, and E. Yilmaz, "Enhancing conversational search: Large language model-aided informative query rewriting," arXiv preprint arXiv:2310.09716, 2023. | 2308.07107#179 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR)
systems, such as search engines, have integrated themselves into our daily lives. | http://arxiv.org/pdf/2308.07107 | Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, Ji-Rong Wen | cs.CL, cs.IR | updated to version 2 | null | cs.CL | 20230814 | 20240119 | [
] |
2308.07124 | 179 | def below_zero(operations: List[int]) -> bool:
    """ You're given a list of deposit and withdrawal operations on a bank account that
    starts with zero balance. Your task is to detect if at any point the balance of
    account falls below zero, and at that point function should return True. Otherwise
    it should return False.
    >>> below_zero([1, 2, 3])
    False
    >>> below_zero([1, 2, -4, 5])
    True
    """
    balance = 0
    for op in operations:
        balance += op
        if balance == 0:
            return True
    return False
Figure 14: Operator misuse bug example. The buggy code (right) incorrectly checks for equality with 0.
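The `== 0` comparison in Figure 14 gets the semantics wrong in both directions: it fires when the balance merely returns to zero, and it stays silent when the balance goes negative without landing exactly on zero. A sketch (function names are ours) exercising both failure modes:

```python
from typing import List


def below_zero_buggy(operations: List[int]) -> bool:
    # Operator misuse: "== 0" detects a zero balance, not a negative one.
    balance = 0
    for op in operations:
        balance += op
        if balance == 0:
            return True
    return False


def below_zero_fixed(operations: List[int]) -> bool:
    # Correct comparison: the account is overdrawn once balance < 0.
    balance = 0
    for op in operations:
        balance += op
        if balance < 0:
            return True
    return False


# Balance path 1, 3, -1, 4: goes negative but never hits exactly 0.
print(below_zero_buggy([1, 2, -4, 5]))  # False (wrong)
print(below_zero_fixed([1, 2, -4, 5]))  # True
# Balance path 1, 0: returns to zero but is never negative.
print(below_zero_buggy([1, -1]))        # True (wrong)
print(below_zero_fixed([1, -1]))        # False
```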
from typing import List
def mean_absolute_deviation(numbers: List[float]) -> float:
    """ For a given list of input numbers, calculate Mean Absolute Deviation around
    the mean of this dataset. Mean Absolute Deviation is the average absolute difference
    between each element and a centerpoint (mean in this case):
    MAD = average | x - x_mean |
    >>> mean_absolute_deviation([1.0, 2.0, 3.0, 4.0])
    1.0
    """
    mean = sum(numbers) / len(numbers)
    return sum(abs(x - mean) for x in numbers) / len(numbers) | 2308.07124#179 | OctoPack: Instruction Tuning Code Large Language Models | Finetuning large language models (LLMs) on instructions leads to vast
performance improvements on natural language tasks. | http://arxiv.org/pdf/2308.07124 | Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre | cs.CL, cs.AI | 57 pages (9 main), 39 figures, 16 tables | null | cs.CL | 20230814 | 20230814 | [
] |