# B ARTIFACTS

| Category | Artifact | URL |
|---|---|---|
| Other models | Diff Codegen 2B (Bradley et al., 2023) | https://hf.co/CarperAI/diff-codegen-2b-v2 |
| Other models | InstructCodeT5+ (Wang et al., 2023c) | https://hf.co/Salesforce/instructcodet5p-16b |
| Other models | BLOOMZ (Muennighoff et al., 2022b) | https://hf.co/bigscience/bloomz |
| Other models | StarChat-β (Tunstall et al., 2023) | https://hf.co/HuggingFaceH4/starchat-beta |
| Other models | CodeGeeX2 (Zheng et al., 2023) | https://github.com/THUDM/CodeGeeX2 |
| Other models | SantaCoder (Allal et al., 2023) | https://hf.co/bigcode/santacoder |
| Other models | StarCoder (Li et al., 2023b) | https://hf.co/bigcode/starcoder |
| Other models | WizardCoder (Luo et al., 2023) | https://hf.co/WizardLM/WizardCoder-15B-V1.0 |
| Other models | GPT-4 (OpenAI, 2023) | https://openai.com/gpt-4 |
| Data ablations (Appendix J), data | Filtered xP3x code | https://hf.co/datasets/bigcode/xp3x-octopack |
| Data ablations (Appendix J), data | StarCoder Self-Instruct | https://hf.co/datasets/codeparrot/self-instruct-starcoder |
| Data ablations (Appendix J), data | Filtered OASST | https://hf.co/datasets/bigcode/oasst-octopack |
| Data ablations (Appendix J), data | Manual selection (Appendix J) | https://hf.co/datasets/bigcode/co-manual |
| Data ablations (Appendix J), models | Self-Instruct (SI) | https://hf.co/bigcode/starcoder-s |
| Data ablations (Appendix J), models | OASST (O) | https://hf.co/bigcode/starcoder-o |
| Data ablations (Appendix J), models | SI + O | https://hf.co/bigcode/starcoder-so |
| Data ablations (Appendix J), models | xP3x + O | https://hf.co/bigcode/starcoder-xo |
| Data ablations (Appendix J), models | COMMITPACKFT + O (Formatting) | https://hf.co/bigcode/starcoder-co-format |
| Data ablations (Appendix J), models | COMMITPACKFT + O (Target loss) | https://hf.co/bigcode/starcoder-co-target |
| Data ablations (Appendix J), models | COMMITPACKFT + O (Manual) | https://hf.co/bigcode/starcoder-co-manual |
| Data ablations (Appendix J), models | COMMITPACKFT + xP3x + O | https://hf.co/bigcode/starcoder-cxo |
| Data ablations (Appendix J), models | COMMITPACKFT + xP3x + SI + O | https://hf.co/bigcode/starcoder-cxso |
| SantaCoder ablations (Appendix G, Appendix H) | Commit format Pretraining | https://hf.co/bigcode/santacoderpack |
| SantaCoder ablations (Appendix G, Appendix H) | Commit format Finetuning | https://hf.co/bigcode/santacoder-cf |
| SantaCoder ablations (Appendix G, Appendix H) | Line diff format Finetuning | https://hf.co/bigcode/santacoder-ldf |
| Other datasets | COMMITPACK Metadata | https://hf.co/datasets/bigcode/commitpackmeta |
| Main artifacts | COMMITPACK | https://hf.co/datasets/bigcode/commitpack |
| Main artifacts | COMMITPACKFT | https://hf.co/datasets/bigcode/commitpackft |
| Main artifacts | HUMANEVALPACK | https://hf.co/datasets/bigcode/humanevalpack |
| Main artifacts | OCTOGEEX | https://hf.co/bigcode/octogeex |
| Main artifacts | OCTOCODER | https://hf.co/bigcode/octocoder |

Table 3: Used and produced artifacts.
# C COMMITPACK AND COMMITPACKFT LANGUAGES

| Language | COMMITPACK MB | COMMITPACK Samples | COMMITPACK % (MB) | COMMITPACKFT MB | COMMITPACKFT Samples | COMMITPACKFT % (MB) |
|---|---|---|---|---|---|---|
| Total | 3709175.78 | 57700105 | 100.0 | 1545.02 | 702062 | 100.0 |
| json | 583293.82 | 3495038 | 15.73 | 86.74 | 39777 | 5.61 |
| xml | 279208.68 | 1923159 | 7.53 | 23.68 | 9337 | 1.53 |
| text | 270662.6 | 1389525 | 7.3 | 66.66 | 46588 | 4.31 |
| javascript | 262824.84 | 5401937 | 7.09 | 125.01 | 52989 | 8.09 |
| objective-c++ | 239009.3 | 32227 | 6.44 | 0.38 | 86 | 0.02 |
| python | 234311.56 | 6189601 | 6.32 | 132.68 | 56025 | 8.59 |
| c | 200876.8 | 2779478 | 5.42 | 21.08 | 8506 | 1.36 |
| c++ | 186585.26 | 2402294 | 5.03 | 14.14 | 4992 | 0.92 |
| markdown | 171849.95 | 7645354 | 4.63 | 131.15 | 62518 | 8.49 |
| java | 127103.45 | 3744377 | 3.43 | 56.28 | 20635 | 3.64 |
| html | 105305.28 | 2366841 | 2.84 | 48.42 | 20214 | 3.13 |
| yaml | 100466.64 | 2592787 | 2.71 | 190.88 | 114320 | 12.35 |
| go | 86444.62 | 1183612 | 2.33 | 12.13 | 5004 | 0.79 |
| csv | 82946.19 | 79268 | 2.24 | 0.53 | 375 | 0.03 |
| php | 74961.64 | 2555419 | 2.02 | 60.22 | 24791 | 3.9 |
| jupyter-notebook | 66854.08 | 94000 | 1.8 | 0.1 | 48 | 0.01 |
| gettext-catalog | 62296.88 | 168327 | 1.68 | 0.13 | 72 | 0.01 |
| sql | 56802.76 | 132772 | 1.53 | 3.74 | 2069 | 0.24 |
| unity3d-asset | 39535.01 | 17867 | 1.07 | 0.16 | 101 | 0.01 |
| typescript | 39254.8 | 572136 | 1.06 | 14.28 | 5868 | 0.92 |
| owl | 36435.46 | 7458 | 0.98 | 0 | 0 | 0.0 |
| ruby | 35830.74 | 2928702 | 0.97 | 195.29 | 69413 | 12.64 |
| c# | 33669.65 | 923157 | 0.91 | 26.84 | 9346 | 1.74 |
| nix | 33547.92 | 221281 | 0.9 | 3.84 | 1593 | 0.25 |
| shell | 25109.95 | 1017977 | 0.68 | 66.86 | 31217 | 4.33 |
| perl | 21148.93 | 374266 | 0.57 | 4.99 | 2288 | 0.32 |
| tex | 17471.11 | 89283 | 0.47 | 0.56 | 307 | 0.04 |
| css | 16306.63 | 548818 | 0.44 | 9.36 | 5049 | 0.61 |
| restructuredtext | 15613.89 | 494037 | 0.42 | 15.73 | 6560 | 1.02 |
| rust | 15011.3 | 296214 | 0.4 | 7.24 | 2996 | 0.47 |
| groff | 12020.19 | 32923 | 0.32 | 0.4 | 192 | 0.03 |
| ini | 8375.16 | 297100 | 0.23 | 21.04 | 11360 | 1.36 |
| scala | 8325.96 | 316064 | 0.22 | 11.18 | 5040 | 0.72 |
| coffeescript | 6795.14 | 292446 | 0.18 | 16.96 | 5513 | 1.1 |
| haskell | 6306.12 | 217325 | 0.17 | 3.31 | 1389 | 0.21 |
| swift | 5902.72 | 319289 | 0.16 | 16.27 | 4849 | 1.05 |
| lua | 5763.12 | 139091 | 0.16 | 1.85 | 920 | 0.12 |
| svg | 5645.44 | 27095 | 0.15 | 0.25 | 169 | 0.02 |
| gas | 5585.38 | 15121 | 0.15 | 0.34 | 193 | 0.02 |
| ocaml | 5355.4 | 81360 | 0.14 | 0.7 | 333 | 0.05 |
| erlang | 5043.32 | 93685 | 0.14 | 1.19 | 480 | 0.08 |
| makefile | 4238.51 | 343379 | 0.11 | 2.53 | 960 | 0.16 |
| asciidoc | 4138.59 | 96671 | 0.11 | 1.86 | 523 | 0.12 |
| emacs-lisp | 3988.65 | 83228 | 0.11 | 1.97 | 1015 | 0.13 |
| scss | 3944.94 | 288190 | 0.11 | 13.21 | 6829 | 0.86 |
| clojure | 3523.41 | 158674 | 0.09 | 5.07 | 2403 | 0.33 |
| org | 3126.22 | 30198 | 0.08 | 0.27 | 136 | 0.02 |
| common-lisp | 2954.9 | 74628 | 0.08 | 1.45 | 778 | 0.09 |
| diff | 2586.05 | 21021 | 0.07 | 1.48 | 680 | 0.1 |
| groovy | 2569.14 | 110057 | 0.07 | 4.17 | 1486 | 0.27 |
| html+erb | 2450.68 | 225379 | 0.07 | 23.1 | 10910 | 1.5 |
| nesc | 2439.56 | 473 | 0.07 | 0.02 | 7 | 0.0 |
| … | | | | | | |

The remaining rows of Table 4 cover the long tail of roughly 300 further languages, from dart (2395.8 MB, 56873 samples) down to bison, m and omgrofl (0.01 MB and a single sample each, with no COMMITPACKFT data); the languages listed above account for roughly 98% of COMMITPACK by size.

Table 4: Programming language distribution of COMMITPACK and COMMITPACKFT. Shortcuts: MB=Megabytes, owl=web-ontology-language, pir=parrot-internal-representation, dcl=digital-command-language, mms=module-management-system, gf=grammatical-framework

# D DATASET CREATION
COMMITPACK We use the GitHub archive available on GCP, which contains metadata from GitHub commits up to 2016.[4] It contains around 3TB of GitHub activity data for more than 2.8 million GitHub repositories, including more than 145 million unique commits, over 2 billion different file paths and the contents of the latest revision for 163 million files.[5] We apply the filters in Table 5 to this dataset. The resulting dataset containing only metadata is uploaded at https://hf.co/datasets/bigcode/commitpackmeta. As the activity dataset only contains commit ids without the actual code changes, we scrape the code from GitHub. We use the metadata and the GitHub API to scrape the changed file prior to and after the respective commit. Some repositories referenced in the activity data are no longer accessible, thus we discard them. This results in COMMITPACK with approximately 4 terabytes uploaded at https://hf.co/datasets/bigcode/commitpack.

[4] https://www.gharchive.org/
[5] https://github.blog/2016-06-29-making-open-source-data-more-available/

| Description | Details |
|---|---|
| License | Only keep samples licensed as MIT, Artistic-2.0, ISC, CC0-1.0, EPL-1.0, MPL-2.0, Apache-2.0, BSD-3-Clause, AGPL-3.0, LGPL-2.1, BSD-2-Clause or without license. |
| Length | Only keep code where the commit message has at least 5 and at most 10,000 characters |
| Noise | Remove code where the lowercased commit message is any of "add files via upload", "can't you see i'm updating the time?", "commit", "create readme.md", "dummy", "first commit", "heartbeat update", "initial commit", "mirroring from micro.blog.", "no message", "pi push", "readme", "update", "updates", "update _config.yaml", "update index.html", "update readme.md", "update readme", "updated readme", "update log", "update data.js", "update data.json", "update data.js", "pi push" or starts with "merge" |
| Single file | Remove samples that contain changes across multiple files |
| Opt-out | Remove samples from repositories owned by users that opted out of The Stack (Kocetkov et al., 2022) |

Table 5: COMMITPACK filters.

COMMITPACKFT Prior work has shown the importance of careful data filtering to maintain quality (Yin et al., 2018; Dhole et al., 2021; Laurençon et al., 2022; Longpre et al., 2023b). To create a smaller version focused on commits that resemble high-quality instructions, we further filter COMMITPACK to create COMMITPACKFT using the steps outlined in Table 6. We also checked for any contamination with HumanEval (Chen et al., 2021) but did not find any solution or docstring present in COMMITPACKFT. This is likely because our commit data only goes up to 2016, which is several years prior to the release of HumanEval. Our filters reduce the dataset by a factor of around 1000, resulting in close to 2 gigabytes uploaded at https://hf.co/datasets/bigcode/commitpackft. To gain a deeper understanding of the rich content within COMMITPACKFT, we analyze commits on its Python subset (56K samples). We first collect the most prevalent commit domains by prompting GPT-4 with:
"I'd like to know the main types of commits on Github and aim to cover as comprehensively as possible." Subsequently, we use GPT-4 to classify each sample using the prompt in Figure 5. The task distribution is visualized in Figure 2.

The filters applied to create COMMITPACKFT are the following (Table 6):

- Remove samples where the before code has more than 50,000 characters
- Remove samples where the after code has 0 characters
- Remove samples where the before and after code are the same (e.g. file name changes)
- Remove samples that contain a hashtag (to avoid references to issues)
- Remove samples where the filename of the code after has an atypical extension for the programming language (e.g. only keep ".py" for Python)
- Remove samples where the filename is contained in the commit message (as we do not use the filename in finetuning)
- Only keep samples where the commit message has more than 10 and less than 1000 characters
- Only keep samples where the commit message can be split into more than 4 and less than 1000 space-separated words
- Remove any appearances of "[skip ci]", "[ci skip]", sequences at the beginning or end that are in brackets, sequences at the beginning that end with ":" and strip whitespace at the beginning or end
- Only keep samples where the message starts with an uppercase letter
- Only keep samples where the concatenation of the code before, a special token and the code after has at least 50 tokens and at most 768 tokens according to the StarCoder tokenizer
- Only keep samples where the lowercased commit message starts with any of the words in Table 7
- Remove samples where the lowercased commit message contains any of "auto commit", "update contributing", "<?xml", "merge branch", "merge pull request", "signed-off-by", "fix that bug where things didn't work but now they should", "put the thingie in the thingie", "add a beter commit message", "code review", "//codereview", "work in progress", "wip", "https://", "http://", "| leetcode", "cdpcp", " i ", "i've", "i'm" or both "thanks to" and "for"
- Remove samples where the lowercased commit message has a match for the regular expressions (?:v)?\d+\.\d+\.\d+(?=$|\S), any of ^[a-f0-9]+(?:-[a-f0-9]+)*$, ([a-f0-9]{40}), issue\s*\d+, bug\s*\d+ or feature\s*\d+
- Downsample: With 90% probability remove samples where the commit message starts with "Bump", "Set version" or "Update version"
Table 6: COMMITPACKFT filters applied to COMMITPACK. With the commit message we refer to the commit message subject only, not the body.

"abort", "accelerate", "access", "accumulate", "add", "address", "adjust", "advance", "align", "allot", "allow", "amplify", "annotate", "append", "apply", "archive", "arrange", "attach", "augment", "automate", "backup", "boost", "break", "bring", "brush up", "build", "bump", "call", "change", "check", "choose", "clarify", "clean", "clear", "clone", "comment", "complete", "compress", "concatenate", "configure", "connect", "consolidate", "convert", "copy", "correct", "cover", "create", "customize", "cut", "deal with", "debug", "decipher", "declare", "decommission", "decomplexify", "decompress", "decrease", "decrypt", "define", "delete", "deploy", "designate", "destroy", "detach", "determine", "develop", "diminish", "disable", "discard", "disentangle", "dismantle", "divide", "document", "downgrade", "drop", "duplicate", "edit", "embed", "emphasize", "enable", "encrypt", "enforce", "enhance", "enlarge", "enumerate", "eradicate", "escalate", "establish", "exclude", "exit", "expand", "expedite", "expire", "extend", "facilitate", "fix", "format", "gather", "generalize", "halt", "handle", "hasten", "hide", "implement", "improve", "include", "increase", "increment", "indent", "index", "inflate", "initialize", "insert", "install", "integrate", "interpolate", "interrupt", "introduce", "isolate", "join", "kill", "leverage", "load", "magnify", "maintain", "make", "manage", "mark", "mask", "mend", "merge", "migrate", "modify", "monitor", "move", "multiply", "normalize", "optimize", "orchestrate", "order", "package", "paraphrase", "paste", "patch", "plug", "prepare", "prepend", "print", "provision", "purge", "put", "quit", "raise", "read", "reannotate", "rearrange", "rebase", "reboot", "rebuild", "recomment", "recompile", "reconfigure", "reconnect", "rectify", "redact", "redefine", "reduce", "refactor", "reformat", "refresh", "reimplement", "reinforce", "relocate", "remove", "rename", "reorder", "reorganize", "repackage", "repair", "rephrase", "replace", "reposition", "reschedule", "reset", "reshape", "resolve", "restructure", "return", "revert", "revise", "revoke", "reword", "rework", "rewrite", "rollback", "save", "scale", "scrub", "secure", "select", "send", "set", "settle", "simplify", "solve", "sort", "speed up", "split", "stabilize", "standardize", "stipulate", "stop", "store", "streamline", "strengthen", "structure", "substitute", "subtract", "support", "swap", "switch", "synchronize", "tackle", "tag", "terminate", "test", "throw", "tidy", "transform", "transpose", "trim", "troubleshoot", "truncate", "tweak", "unblock", "uncover", "undo", "unify", "uninstall", "unplug", "unpublish", "unravel", "unstage", "unsync", "untangle", "unwind", "update", "upgrade", "use", "validate", "verify", "watch", "watermark", "whitelist", "withdraw", "work", "write"

Table 7: Commit message starting words allowed in COMMITPACKFT.

Please categorize the following commit message, which may fall into more than one category.

### Category
Bug fixes, New features, Refactoring/code cleanup, Documentation, Testing, User interface, Dependencies, Configuration, Build system/tooling, Performance improvements, Formatting/Linting, Security, Technical debt repayment, Release management, Accessibility, Deprecation, Logging/Instrumentation, Internationalization

### Commit Message
Add the blacklist checking to the bulk

### Classification
Bug fixes, New features

### Commit Message
{COMMIT_MESSAGE}

### Classification

Figure 5: GPT-4 1-shot prompt for classifying commits in COMMITPACKFT.
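The rules in Tables 6 and 7 are simple string-level checks, so they can be expressed compactly. Below is a minimal sketch, not the authors' released pipeline, of how a few of the message-level filters could be applied; the verb set is an abbreviated subset of Table 7, and the helper name `keep_commit_message` is hypothetical.

```python
import re

# Abbreviated subset of the allowed starting verbs from Table 7.
ALLOWED_STARTS = {"add", "fix", "remove", "update", "refactor", "implement"}

# Regular expressions from Table 6 that flag version strings, hashes and issue references.
BANNED_PATTERNS = [
    r"(?:v)?\d+\.\d+\.\d+(?=$|\S)",
    r"^[a-f0-9]+(?:-[a-f0-9]+)*$",
    r"([a-f0-9]{40})",
    r"issue\s*\d+",
    r"bug\s*\d+",
    r"feature\s*\d+",
]


def keep_commit_message(subject: str) -> bool:
    """Return True if the commit subject passes a few of the Table 6 filters."""
    # Length filters: >10 and <1000 characters, >4 and <1000 space-separated words.
    if not (10 < len(subject) < 1000):
        return False
    if not (4 < len(subject.split()) < 1000):
        return False
    # Must start with an uppercase letter.
    if not subject[0].isupper():
        return False
    lowered = subject.lower()
    # Must start with one of the allowed verbs (Table 7).
    if lowered.split()[0] not in ALLOWED_STARTS:
        return False
    # Must not match any of the banned regular expressions.
    return not any(re.search(p, lowered) for p in BANNED_PATTERNS)


print(keep_commit_message("Add the blacklist checking to the bulk upload endpoint"))  # True
print(keep_commit_message("Bump version to v1.2.3"))  # False
```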
xP3x We use a subset of xP3x (Muennighoff et al., 2022b) focusing on code datasets consisting of APPS (Hendrycks et al., 2021), CodeContests (Li et al., 2022b), Jupyter Code Pairs,[6] MBPP (Austin et al., 2021), XLCoST (Zhu et al., 2022), Code Complex (Jeon et al., 2022), Docstring Corpus (Barone & Sennrich, 2017), Great Code (Hellendoorn et al., 2019) and State Changes.[7]

[6] https://hf.co/datasets/codeparrot/github-jupyter-text-code-pairs
[7] https://hf.co/datasets/Fraser/python-state-changes

OASST We reuse a filtered variant of OASST (Köpf et al., 2023) from prior work (Dettmers et al., 2023) and apply additional filters to remove responses that refuse to comply with the user request. To compute the programming languages and code fraction for OASST depicted in Table 1, we count all responses containing e.g. "```python" or "```py" for the Python programming language. There are code samples that are not enclosed in backticks or do not specify the language, thus we are likely underestimating the actual fraction of code data for OASST in Table 1.
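As a rough illustration of that counting heuristic, the snippet below is an assumption about the procedure rather than the released analysis code; it tallies responses per language tag based on opening code fences.

```python
import re
from collections import Counter


def count_code_languages(responses: list[str]) -> Counter:
    """Count responses per language tag based on opening code fences such as ```python."""
    counts: Counter = Counter()
    for text in responses:
        # Find language tags that directly follow a triple-backtick fence.
        for tag in re.findall(r"```([A-Za-z0-9+#]+)", text):
            counts[tag.lower()] += 1
    return counts


responses = [
    "Here is a fix:\n```python\nprint('hi')\n```",
    "Use this:\n```py\nx = 1\n```",
    "No code here.",
]
print(count_code_languages(responses))  # Counter({'python': 1, 'py': 1})
```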
# E COMPARING DATA BEFORE AND AFTER FILTERING

In Table 8 we compare word statistics prior to and after filtering COMMITPACK to create COMMITPACKFT. The mean commit subject and message lengths increase, suggesting that messages are more informative in COMMITPACKFT. The code lengths decrease significantly as we limit the number of allowed tokens in the filters in Table 6. Notably, in the filtered data the post-commit code is 77.6/59.1 = 1.31 times the length of the pre-commit code (a 31% increase), as opposed to 3269.8/3269.9 ≈ 1.0 before filtering (essentially no change). Thus, the filtered data carries significantly more signal per token with fewer repetitions of the code prior to the commit.

| Metric | Before Filter | After Filter | Difference |
|---|---|---|---|
| Subject Length (words) | 5.7±0.02 | 6.9±0.01 | +1.28 |
| Message Length (words) | 8.7±0.06 | 9.9±0.05 | +1.34 |
| Pre-Commit Code Length (words) | 3269.9±298.8 | 59.1±0.19 | -3210.9 |
| Post-Commit Code Length (words) | 3269.8±299.5 | 77.6±0.23 | -3214.2 |

Table 8: The effect of data filters on subject, message, and code lengths. We compare differences in word statistics of COMMITPACK and COMMITPACKFT.

# F COMPARING COMMITPACK AND THE STACK

In Table 9 we provide statistics on repositories and usernames of COMMITPACK and The Stack (Kocetkov et al., 2022). COMMITPACK contains a total of 1,934,255 repositories. Around half (49.3%) of them are also in The Stack. However, The Stack only provides the raw code files of these repositories from some fixed point in time. COMMITPACK contains the changes made to the code files in the form of commits. Thus, the same code file may appear multiple times in COMMITPACK for each change that was made to it.
Therefore, The Stack only contains 3 terabytes of data, while COMMITPACK contains close to 4.

| Statistic | COMMITPACK | The Stack 1.2 | Shared | Shared (%) |
|---|---|---|---|---|
| Repositories | 1,934,255 | 18,712,378 | 954,135 | 49.3% |
| Usernames | 825,885 | 6,434,196 | 663,050 | 80.3% |

Table 9: Overlap in repositories and usernames of COMMITPACK and The Stack.

# G PRETRAINING ON COMMITPACK

Due to the scale of COMMITPACK, it is also adequate as a large-scale pretraining dataset.
We have included parts of COMMITPACK during the pretraining of StarCoder (Li et al., 2023b) in the format of <commit_before>code_before<commit_msg>message<commit_after>code_after. We also pretrain a new model, named SANTACODERPACK, with the same architecture as SantaCoder (Allal et al., 2023) on COMMITPACK using this format. We filter COMMITPACK for our six evaluation languages and samples that fit within 8192 tokens, leaving us a total of 35B tokens. Following prior work (Muennighoff et al., 2023), we train on this data repeated close to 4 times for a total of 131B tokens, taking 14 days. Detailed hyperparameters are in Appendix M.

In Table 10, we benchmark StarCoder and SANTACODERPACK on HUMANEVALFIX using the above-detailed commit format. We find that the commit format leads to very strong performance for StarCoder, often surpassing the instruction-tuned OCTOCODER from Table 2. However, this pretraining format is not suitable for HUMANEVALEXPLAIN, limiting its universality. For SANTACODERPACK, we find performance comparable to SantaCoder, including checkpoints at 131B and 236B tokens. SANTACODERPACK performs slightly worse on Python than SantaCoder. We hypothesize that this discrepancy is due to a multilingual tax, as SANTACODERPACK needs to accommodate three additional coding languages (Go, C++ and Rust). SantaCoder has thus more capacity allocated to Python, JavaScript, and Java. SANTACODERPACK may also be bottlenecked by its small model size of 1.1B parameters.
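As an illustration of this pretraining format, the following sketch renders one commit record into a single training string; the dictionary keys are hypothetical and simply stand in for whatever field names the underlying data uses.

```python
def to_commit_format(sample: dict) -> str:
    """Render a commit as <commit_before>...<commit_msg>...<commit_after>... (sketch)."""
    return (
        "<commit_before>" + sample["code_before"]
        + "<commit_msg>" + sample["message"]
        + "<commit_after>" + sample["code_after"]
    )


sample = {
    "code_before": "def add(a, b):\n    return a - b\n",
    "message": "Fix subtraction bug in add",
    "code_after": "def add(a, b):\n    return a + b\n",
}
print(to_commit_format(sample))
```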
More research into what exactly happens during pretraining (Xia et al., 2022; Biderman et al., 2023a) and how to unify pretraining and instruction tuning is needed. Prior work has also found that including raw code data during pretraining benefits some natural language tasks (Muennighoff et al., 2023). Future work may consider the effects of including code commit data on natural language tasks.

| Model | Python | JavaScript | Java | Go | C++ | Rust | Avg. |
|---|---|---|---|---|---|---|---|
| SantaCoder (131B tokens) Instruct Format | 6.5 | 4.2 | 2.9 | - | - | - | - |
| SantaCoder (236B tokens) Instruct Format | 7.1 | 4.2 | 1.8 | - | - | - | - |
| SANTACODERPACK (131B tokens) Commit Format | 3.2 | 4.9 | 1.8 | 3.6 | 4.2 | 1.7 | 3.3 |
| StarCoder Commit Format | 32.7 | 33.6 | 33.0 | 31.9 | 31.6 | 20.2 | 30.5 |

Table 10: Zero-shot pass@1 (%) performance on HUMANEVALFIX of pretraining experiments.

# H LINE DIFF FORMAT FOR FIXING CODE

We finetune SantaCoder to experiment with different formatting strategies for fixing bugs, comparing full code generation and code diff generation. When fixing a code bug, usually only a small part of the code needs to change. Only generating the code diff corresponding to the necessary change can make inference significantly more efficient by avoiding repeated characters in the output generation. We finetune SantaCoder on the Python, Java and JavaScript subset of COMMITPACKFT. We exclude other languages as SantaCoder has only been pretrained on these three languages (Allal et al., 2023).

Commit Format For full code generation, we reuse the format that we employed for commits in StarCoder pretraining from Appendix G: <commit_before>code_before<commit_msg>message<commit_after>code_after. However, SantaCoder has not seen this format during pretraining and does not have special tokens like StarCoder for the delimiters. Thus, for SantaCoder e.g. <commit_before> is tokenized as ['<', 'commit', '_', 'before', '>'].
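That difference is easy to inspect directly. A small check along these lines, assuming access to the Hugging Face tokenizers for the two checkpoints, shows whether <commit_before> maps to a single special token or to several ordinary sub-tokens:

```python
from transformers import AutoTokenizer

# StarCoder reserves dedicated special tokens for the commit delimiters,
# while SantaCoder splits them into ordinary sub-tokens.
for name in ["bigcode/starcoder", "bigcode/santacoder"]:
    tok = AutoTokenizer.from_pretrained(name)
    pieces = tok.tokenize("<commit_before>")
    print(name, len(pieces), pieces)
```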
Unified diff format For code diff generation, a simple solution is using the unified diff format,[8] which is a standard way to display changes between code files in a compact and readable format (Lehman et al., 2022; Jung, 2021; Xu et al., 2022b; Monperrus et al., 2021). We depict an example of this format in Figure 6. However, the unified diff format still requires the model to output several unchanged lines before and after the actual modification. Thus, its efficiency gains are limited and there is still unnecessary duplication of the input.

[8] https://en.wikipedia.org/wiki/Diff#Unified_format
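For reference, a unified diff like the one in Figure 6 below can be produced with Python's standard difflib; this is only an illustration of the format, not part of the finetuning pipeline.

```python
import difflib

before = """from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = elem - elem2
                if distance < threshold:
                    return True
    return False
"""
# The fix replaces the raw difference with its absolute value.
after = before.replace("elem - elem2", "abs(elem - elem2)")

diff = difflib.unified_diff(
    before.splitlines(), after.splitlines(),
    fromfile="buggy.py", tofile="fixed.py", lineterm="",
)
print("\n".join(diff))
```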
Code with the bug:

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = elem - elem2
                if distance < threshold:
                    return True
    return False

Unified diff fixing the bug:

@@ -4,7 +4,7 @@
     for idx, elem in enumerate(numbers):
         for idx2, elem2 in enumerate(numbers):
             if idx != idx2:
-                distance = elem - elem2
+                distance = abs(elem - elem2)
                 if distance < threshold:
                     return True
Figure 6: The first problem from the HUMANEVALFIX Python split and the necessary change to fix the bug in unified diff format. Top: Code with and without the bug from Figure 11. Bottom: Necessary change to fix the bug in unified diff format.

Line diff format To address the inefficiencies of the unified diff format, we propose the line diff format for representing code differences. There are two requirements for our format: (1) the diff can be unambiguously applied to the code before the commit to generate the code after the commit, and (2) the code diff should be as short as possible to maximize efficiency by avoiding the inclusion of unchanged code. In Figure 7, we show how our format addresses these. The line diff format keeps track of each change sequentially line-by-line to ensure the code can be correctly modified. By focusing only on the lines that change, we reduce the number of characters in the diff by 70% compared to the unified diff representation in Figure 6.

- 7                distance = elem - elem2
+ 7                distance = abs(elem - elem2)

Figure 7: The line diff format for the problem from Figure 6.

Both the unified diff format and our line diff format require the model to predict line numbers. This is very challenging when training on raw code as models need to count and keep track of line numbers. To simplify line number prediction, we automatically add line numbers to the raw code in the finetuning dataset for the line diff format. This allows the model to simply copy the line number into the output, simplifying the diff generation. However, it diminishes efficiency slightly by adding additional input tokens that the model needs to process.

As summarized in Table 11, finetuning SantaCoder using the line diff format significantly improves performance on HUMANEVALFIX across all languages compared to SantaCoder without finetuning. It also outperforms finetuning using the commit format, which only provides gains on JavaScript and Java compared to no finetuning. However, finetuning on the diff format may converge slower than the commit format, as the diff format significantly differs from the raw code seen during pretraining.

Figures 8, 9 and 10 show line diff generations of our model. A limitation of our current line diff implementation is that it does not handle code insertion well. The inserted lines may change the line numbers of all following lines, which can result in problems when applying the diff. Further, the diff format is not useful for HUMANEVALEXPLAIN and HUMANEVALSYNTHESIZE. Future work could consider training models that can be instructed to use the line diff format, such as for HUMANEVALFIX, but can also explain or synthesize code without producing a diff.
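To make the line diff representation concrete, below is a small sketch, not the released training pipeline, that derives a line diff from a buggy/fixed pair of functions; it only covers one-to-one line replacements, mirroring the limitation around insertions noted above, and the function name `line_diff` is hypothetical.

```python
def line_diff(before: str, after: str) -> str:
    """Render changed lines as '- <n> old' / '+ <n> new' pairs (replacements only)."""
    out = []
    # Assumes both versions have the same number of lines (no insertions or deletions).
    for n, (old, new) in enumerate(zip(before.splitlines(), after.splitlines()), start=1):
        if old != new:
            out.append(f"- {n} {old}")
            out.append(f"+ {n} {new}")
    return "\n".join(out)


buggy = "def f(x):\n    return x - 1\n"
fixed = "def f(x):\n    return x + 1\n"
print(line_diff(buggy, fixed))
# - 2     return x - 1
# + 2     return x + 1
```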
| Model | Python | JavaScript | Java |
|---|---|---|---|
| SantaCoder | 7.1 | 4.2 | 1.8 |
| SantaCoder + Commit format finetuning | 3.8 | 5.3 | 9.2 |
| SantaCoder + Line diff format finetuning | 9.9 | 9.7 | 10.0 |

Table 11: Zero-shot pass@1 (%) performance on HUMANEVALFIX of SantaCoder formatting experiments.

- 3 let depth = 0, max_depth = 0;
+ 3 let depth = 0, max_depth = 1;
- 12     return max_depth;
+ 12     return max_depth - 1;
- 14     return paren_string.split(' ')
- 15         .filter(x => x != '')
- 16         .map(x => parseParenGroup(x));
- 17 }
+ 14     let paren_list = paren_string.split(' ');
+ 15     let nested_parens = paren_list.map(x => parseParenGroup(x));
+ 16     return nested_parens.reduce((prev, curr) => {
+ 17         if (prev == 0) {
+ 18             return curr;
+ 19         } else {
+ 20             return curr - 1;
+ 21         }
+ 22     });
+ 23 }

Figure 8: A line diff generation of our model on a JavaScript HUMANEVALFIX problem.

- 18         if (current_depth < 0) {
+ 18         if (current_depth < 0 && current_string.length() > 0) {

Figure 9: A line diff generation of our model on a Java HUMANEVALFIX problem.

- 2     for i, l1 in enumerate(l):
- 3         for j in range(i, len(l)):
+ 2     for i in range(0, len(l)):
+ 3         for j in range(i+1, len(l)):

Figure 10: A line diff generation of our model on a Python HUMANEVALFIX problem.
# I RESULTS ON HUMANEVALFIXDOCS

The default version of HUMANEVALFIX does not include docstrings, but only provides the unit tests to the model alongside the buggy function. An alternative is providing docstrings as the source of ground truth for the model to fix the buggy function. Solving from docstrings is generally easier for models than from tests, as models can also solve it via pure code synthesis without looking at the buggy function at all. We provide results of some models on this variant in Table 12. For StarCoder, we distinguish two prompting formats: an instruction to fix bugs like in Figure 3 or the commit format it has seen during pretraining (Appendix G). OCTOCODER performs very strongly on this variant. Diff Codegen 2B (Bradley et al., 2023) performs poorly as its predicted code diffs are often irrelevant to the actual bug, see Figure 38.

| Model | Python | JavaScript | Java | Go | C++ | Rust | Avg. |
|---|---|---|---|---|---|---|---|
| Non-permissive models | | | | | | | |
| GPT-4 | 88.4 | 80.5 | 82.9 | 81.1 | 82.3 | 68.9 | 80.7 |
| Permissive models | | | | | | | |
| Diff Codegen 2B | 0.0 | 0.1 | 0.0 | 0.3 | 0.0 | 0.2 | 0.1 |
| StarCoder Commit Format | 43.5 | 29.3 | 45.7 | 31.9 | 28.1 | 19.4 | 27.1 |
| StarCoder Instruct Format | 41.7 | 30.7 | 44.3 | 34.5 | 28.7 | 14.0 | 26.5 |
| OCTOCODER | 53.8 | 48.1 | 54.3 | 54.9 | 49.2 | 32.1 | 48.7 |

Table 12: Zero-shot pass@1 (%) performance on HUMANEVALFIXDOCS.

# J FULL INSTRUCTION DATA ABLATIONS

We provide results of additional instruction tuning data ablations in Table 13; none of the additional mixtures we try prove better than COMMITPACKFT + OASST. We experiment with formatting the data as <commit_before>old code<commit_msg>message<commit_after>new code for COMMITPACKFT and <commit_before><commit_msg>input<commit_after>output for OASST, referred to as the "Formatting" ablation.
We hypothesized that aligning the formatting during instruction tuning with the commit format that we used during pretraining (Appendix G) would improve performance. While it seems to improve performance for HUMANEVALFIX compared to our default formatting (see Figure 17), it reduces performance on the other tasks, leading to a worse average score of 35.3 in Table 13. "Target Loss" refers to an ablation where we mask loss for inputs as is commonly done during instruction tuning (Muennighoff et al., 2022b). While this leads to the best performance on HUMANEVALSYNTHESIZE, its average performance is worse compared to COMMITPACKFT + OASST, where the loss is computed over the full sequence. We also perform an ablation where we manually select 1178 high-quality samples (725 from OASST and 89, 61, 86, 72, 70 and 75 from COMMITPACKFT for Python, JavaScript, Java, Go, C++ and Rust, respectively). However, this manual selection did not outperform random selection for OCTOCODER. It did, however, perform better for OCTOGEEX, hence we used it for OCTOGEEX. We hypothesize that our models could achieve significantly better performance by further improving the quality of the instruction data beyond these ablations. This may necessitate very careful human selection of samples and manual editing of the data to ensure a uniform style in the outputs. We leave such explorations to future work.
| Instruction Tuning Dataset | Fix | Explain | Synthesize | Average |
|---|---|---|---|---|
| Without instruction tuning | 8.7 | 0.0 | 33.6 | 14.1 |
| Self-Instruct (SI) | 23.6 | 0.6 | 43.0 | 22.2 |
| OASST | 23.1 | 34.5 | 46.4 | 34.7 |
| SI + OASST | 24.9 | 28.7 | 46.2 | 33.3 |
| xP3x + OASST | 28.4 | 28.4 | 45.0 | 33.9 |
| COMMITPACKFT + OASST | 30.4 | 35.1 | 46.2 | 37.2 |
| COMMITPACKFT + OASST (Formatting) | 31.1 | 28.9 | 45.8 | 35.3 |
| COMMITPACKFT + OASST (Target loss) | 29.8 | 31.2 | 47.8 | 36.3 |
| COMMITPACKFT + OASST (Manual) | 27.2 | 29.6 | 45.8 | 34.2 |
| COMMITPACKFT + xP3x + OASST | 30.9 | 29.5 | 45.9 | 35.4 |
| COMMITPACKFT + SI + xP3x + OASST | 31.4 | 33.8 | 46.0 | 37.1 |

Table 13: Zero-shot pass@1 (%) performance across the Python split of HUMANEVALPACK for StarCoder instruction tuning data ablations.

# K HUMANEVALFIX BUG TYPES

Table 14 contains an overview of bugs that were manually added by one of the authors to HumanEval solutions for the construction of HUMANEVALFIX. Figures 11-16 contain an example of each type from the Python split. The bug type for each problem is the same across all programming languages in HUMANEVALFIX, but for a few samples it affects a different part of the solution due to the code solutions not being perfectly parallel across languages.
| Bug type | Subtype | Explanation | Example | Count |
|---|---|---|---|---|
| Missing logic | | Misses code needed to solve the problem | Figure 11 | 33 |
| Excess logic | | Contains excess code leading to mistakes | Figure 12 | 31 |
| Wrong logic | Value misuse | An incorrect value is used | Figure 13 | 44 |
| Wrong logic | Operator misuse | An incorrect operator is used | Figure 14 | 25 |
| Wrong logic | Variable misuse | An incorrect variable is used | Figure 15 | 23 |
| Wrong logic | Function misuse | An incorrect function is used | Figure 16 | 8 |
| Total | | | | 164 |

Table 14: HUMANEVALFIX bug types.
Correct solution:

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
    return False

Buggy solution:

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = elem - elem2
                if distance < threshold:
                    return True
    return False
Figure 11: Missing logic bug example. The buggy code misses the 'abs' statement.

Correct solution:

def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).
    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
    return number % 1.0

Buggy solution:

def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).
    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
    return number % 1.0 + 1.0
2308.07124#130 | OctoPack: Instruction Tuning Code Large Language Models | Figure 12: Excess logic bug example. The buggy code (right) incorrectly adds 1 to the result. from typing import List, Tuple from typing import List, Tuple def sum_product(numbers: List[int]) -> Tuple[int, int]: """ For a given list of integers, return a tuple consisting of a sum and a product of all the integers in a list. Empty sum should be equal to 0 and empty product should be equal to 1. >>> sum_product([]) (0, 1) >>> sum_product([1, 2, 3, 4]) (10, 24) """ sum_value = 0 prod_value = 1 for n in numbers: sum_value += n prod_value *= n return sum_value, prod_value def sum_product(numbers: List[int]) -> Tuple[int, int]: """ For a given list of integers, return a tuple consisting of a sum and a product of all the integers in a list. Empty sum should be equal to 0 and empty product should be equal to 1. >>> sum_product([]) (0, 1) >>> sum_product([1, 2, 3, 4]) (10, 24) """ sum_value = 0 prod_value = 0 for n in numbers: sum_value += n prod_value *= n return sum_value, prod_value | 2308.07124#129 | 2308.07124#131 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#131 | OctoPack: Instruction Tuning Code Large Language Models | Figure 13: Value misuse bug example. The buggy code (right) incorrectly initializes the product to 0. 37 # OctoPack: Instruction Tuning Code Large Language Models from typing import List from typing import List # def below_zero(operations: List[int]) -> bool: """ Youâ re given a list of deposit and withdrawal operations on a bank account that starts with zero balance. Your task is to detect if at any point the balance of account fallls below zero, and at that point function should return True. Otherwise it should return False. >>> below_zero([1, 2, 3]) False >>> below_zero([1, 2, -4, 5]) True """ balance = 0 for op in operations: balance += op if balance < 0: return True # def below_zero(operations: List[int]) -> bool: """ Youâ | 2308.07124#130 | 2308.07124#132 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#132 | OctoPack: Instruction Tuning Code Large Language Models | re given a list of deposit and withdrawal operations on a bank account that starts with zero balance. Your task is to detect if at any point the balance of account fallls below zero, and at that point function should return True. Otherwise it should return False. >>> below_zero([1, 2, 3]) False >>> below_zero([1, 2, -4, 5]) True """ balance = 0 for op in operations: balance += op if balance == 0: return True return False # return False | 2308.07124#131 | 2308.07124#133 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#133 | OctoPack: Instruction Tuning Code Large Language Models | Figure 14: Operator misuse bug example. The buggy code (right) incorrectly checks for equality with 0. from typing import List from typing import List def mean_absolute_deviation(numbers: List[ float]) -> float: """ For a given list of input numbers, calculate Mean Absolute Deviation around the mean of this dataset. Mean Absolute Deviation is the average absolute difference between each element and a centerpoint (mean in this case): MAD = average | x - x_mean | >>> mean_absolute_deviation([1.0, 2.0, 3.0, 4.0]) 1.0 """ mean = sum(numbers) / len(numbers) return sum(abs(x - mean) for x in numbers) / len(numbers) def mean_absolute_deviation(numbers: List[ float]) -> float: """ For a given list of input numbers, calculate Mean Absolute Deviation around the mean of this dataset. Mean Absolute Deviation is the average absolute difference between each element and a centerpoint (mean in this case): MAD = average | x - x_mean | >>> mean_absolute_deviation([1.0, 2.0, 3.0, 4.0]) 1.0 """ mean = sum(numbers) / len(numbers) return sum(abs(x - mean) for x in numbers) / mean Figure 15: Variable misuse bug example. The buggy code (right) incorrectly divides by the mean. | 2308.07124#132 | 2308.07124#134 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#134 | OctoPack: Instruction Tuning Code Large Language Models | def flip_case(string: str) -> str: """ For a given string, flip lowercase characters to uppercase and uppercase to lowercase. >>> flip_case(â Helloâ ) â hELLOâ """ return string.swapcase() def flip_case(string: str) -> str: """ For a given string, flip lowercase characters to uppercase and uppercase to lowercase. >>> flip_case(â Helloâ ) â hELLOâ """ return string.lower() Figure 16: Function misuse bug example. The buggy code (right) incorrectly uses the â lower()â function. 38 # OctoPack: Instruction Tuning Code Large Language Models # L PERFORMANCE BREAKDOWN BY HUMANEVALFIX BUG TYPE All bugs in HUMANEVALFIX are categorized into bug types as described in Appendix K. In Table 15, we break down the HUMANEVALFIX performance of select models from Table 2 by bug type. We find that models struggle most with bugs that require removing excess logic (e.g. Figure 12). WizardCoder is only able to solve 11% of excess logic bugs while solving about four times more bugs that relate to value misuse. The performance of OCTOGEEX and OCTOCODER is more stable than WizardCoder across the different bug types, possibly due to the diversity of COMMITPACKFT as displayed in Figure 2. GPT-4 performs best across all bug types. Bug type Subtype OCTOGEEX OCTOCODER WizardCoder GPT-4 Missing logic Excess logic Wrong logic Value misuse Operator misuse Variable misuse Function misuse 24.2 16.3 33.2 32.8 35.7 25.0 24.4 16.9 34.7 42.0 33.7 37.5 31.2 11.0 45.1 34.4 30.4 37.5 Overall 28.1 30.4 31.8 45.5 38.7 50.0 56.0 43.5 50.0 47.0 Table 15: Breakdown of HUMANEVALFIX Python pass@1 (%) performance by bug type for select models. Statistics for each bug type are in Table 14. # M HYPERPARAMETERS | 2308.07124#133 | 2308.07124#135 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#135 | OctoPack: Instruction Tuning Code Large Language Models | StarCoder finetuning (OCTOCODER) For all experiments finetuning StarCoder, we use a learning rate of 5e-4 with a cosine schedule and linear warmup. We use a batch size of 32 and train for up to one epoch, as we did not observe benefits from more steps. OCTOCODER was trained for 35 steps with a sequence length of 2048 and packing corresponding to 2.2 million total finetuning tokens. CodeGeeX finetuning (OCTOGEEX) To create OCTOGEEX, we finetune CodeGeeX2 for 35 steps with a batch size of 48 and a learning rate of 5e-5 largely following the OCTOCODER setup. SantaCoder finetuning For all experiments finetuning SantaCoder, we use a learning rate of 5e-5 with a cosine schedule and linear warmup. We finetune SantaCoder using a batch size of 64 for up to 200,000 steps. SantaCoder pretraining (SANTACODERPACK) We follow the setup from Allal et al. (2023) to pretrain on COMMITPACK with the exception of using a sequence length of 8192 and using the tokenizer from StarCoder, which has special tokens for the commit format delimiters (see Appendix G). SANTACODERPACK utilizes Multi Query Attention (MQA) (Shazeer, 2019) but removes Fill-in-the-Middle (FIM) (Bavarian et al., 2022). We conducted pretraining on 32 A100 GPUs, totaling 250k training steps, with a global batch size of 64. Other hyperparameter settings follow SantaCoder, including using Adam with β1 = 0.9, β2 = 0.95, ϵ = 10^-8, and a weight decay of 0.1. The learning rate is set to 2 × 10^-4 and follows a cosine decay after warming up for 2% of the training steps. # N PROMPTS The prompting format can significantly impact performance. In the spirit of true few-shot learning (Perez et al., 2021) we do not optimize prompts and go with the format provided by the respective model authors or the most intuitive format if none is provided. | 2308.07124#134 | 2308.07124#136 | 2308.07124 | [
"2302.00288"
]
|
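As a concrete illustration of the OCTOCODER finetuning recipe given in Appendix M above (learning rate 5e-4 with a cosine schedule and linear warmup, batch size 32, sequence length 2048, 35 steps), the following is a minimal sketch built on the Hugging Face Trainer. It is not the paper's training code: the placeholder dataset, the warmup fraction, and the bf16 setting are illustrative assumptions.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "bigcode/starcoder"  # base model that OCTOCODER starts from

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Tiny placeholder corpus standing in for COMMITPACKFT + OASST examples,
# already rendered into the "Question: ... Answer: ..." format.
texts = [
    "Question: Fix bugs in add.\ndef add(a, b):\n    return a - b\n\n"
    "Answer:\ndef add(a, b):\n    return a + b\n"
]
train_ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="octocoder-ft",
    max_steps=35,                    # reported number of finetuning steps
    per_device_train_batch_size=32,  # reported batch size
    learning_rate=5e-4,              # reported learning rate
    lr_scheduler_type="cosine",      # cosine schedule
    warmup_ratio=0.05,               # assumed warmup fraction (not reported)
    bf16=True,                       # assumed precision setting
    logging_steps=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```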
2308.07124#136 | OctoPack: Instruction Tuning Code Large Language Models | For each task, we define an instruction, an optional context and an optional function start (Table 16). The function start is provided to make sure the model directly completes the function without having to search for the function in the model output. These three parts are then combined in slightly different ways for each model (Figures 17-23). We implement our evaluation using open-source frameworks (Ben Allal et al., 2022; Gao et al., 2021). 39 # OctoPack: | 2308.07124#135 | 2308.07124#137 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#137 | OctoPack: Instruction Tuning Code Large Language Models | Instruction Tuning Code Large Language Models # HUMANEVALFIX # Instruction Context # Instruction # Context Fix bugs in has_close_elements. from typing import List # def has_close_elements(numbers: List[float], threshold: float) -> bool: # for idx, elem in enumerate(numbers): # for idx2, elem2 in enumerate(numbers): # if idx != idx2: # distance = elem - elem2 if distance < threshold: # return True # return False # Function start from typing import List def has_close_elements(numbers: List[float], threshold: float) -> bool: HUMANEVALEXPLAIN Instruction (Describe) Context (Describe) Provide a concise natural language description of the code using at most 213 characters. from typing import List def has_close_elements(numbers: List[float], threshold: float) -> bool: for idx, elem in enumerate(numbers): for idx2, elem2 in enumerate(numbers): if idx != idx2: distance = abs(elem - elem2) if distance < threshold: return True return False Instruction (Synthesize) Write functional code in Python according to the description. Context (Synthesize) {Description generated by the model} Function start (Synthesize) from typing import List def has_close_elements(numbers: List[float], threshold: float) -> bool: HUMANEVALSYNTHESIZE Instruction Write a Python function â has_close_elements(numbers: List[float], thresh- old: float) -> boolâ to solve the following problem: Check if in given list of numbers, are any two numbers closer to each other than given threshold. »> has_close_elements([1.0, 2.0, 3.0], 0.5) False »> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) True from typing import List Function start def has_close_elements(numbers: List[float], threshold: float) -> bool: """ Check if in given list of numbers, are any two numbers closer to each other than given threshold. >>> has_close_elements([1.0, 2.0, 3.0], 0.5) False >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) True """ | 2308.07124#136 | 2308.07124#138 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#138 | OctoPack: Instruction Tuning Code Large Language Models | Table 16: Instructions and function examples used. If no function start or no context is present, that part is not added to the prompt (and the preceding newline is also removed). 40 # OctoPack: Instruction Tuning Code Large Language Models Question: {instruction} {context} Answer: {function_start} # Figure 17: OCTOCODER and OCTOGEEX prompting format Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} {context} ### Response: {function_start} Figure 18: WizardCoder prompting format from their codebase.9 Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} {context} ### Response:{function_start} Figure 19: InstructCodeT5+ decoder prompting format from their codebase.10 The encoder receives the same prompt without the function start. 9https://github.com/nlpxucan/WizardLM/blob/9c6fb34d012d60dc4f31348ee0a8 e35335c04564/WizardCoder/src/humaneval_gen.py#L38 10https://github.com/salesforce/CodeT5/blob/main/CodeT5%2B/humaneval/gen erate_codet5p.py#L89 11https://huggingface.co/HuggingFaceH4/starchat-beta 41 # OctoPack: Instruction Tuning Code Large Language Models <|system|> <|end|> <|user|> {instruction} {context}<|end|> <|assistant|> {function_start} Figure 20: StarChat-β prompting format from their documentation.11 {context} {instruction} {function_start} Figure 21: Default prompting format (StarCoder, BLOOMZ). Used for Figure 3. {context} {instruction} Figure 22: GPT-4 prompting format. Same as Figure 21 but excluding the function start, as we found the model would not continue from the provided function start, likely due to the chat mode. {context} {instruction} Start your code with: {func_start} Figure 23: GPT-4 prompting format for HUMANEVALEXPLAIN (Synthesize). | 2308.07124#137 | 2308.07124#139 | 2308.07124 | [
"2302.00288"
]
|
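To make the assembly of instruction, context, and function start concrete, here is a small helper (an illustrative sketch, not the paper's evaluation code) that builds the OCTOCODER/OCTOGEEX prompt of Figure 17 from the three parts defined in Table 16, dropping the context or function start entirely when it is absent, as stated in the Table 16 caption. The exact whitespace is an assumption.

```python
def build_octocoder_prompt(instruction: str,
                           context: str = "",
                           function_start: str = "") -> str:
    """Assemble the Question/Answer prompt of Figure 17; empty parts are omitted."""
    question = instruction if not context else f"{instruction}\n{context}"
    answer = "Answer:" if not function_start else f"Answer:\n{function_start}"
    return f"Question: {question}\n\n{answer}"


# Example: a HumanEvalSynthesize-style prompt with a function start.
print(build_octocoder_prompt(
    instruction="Write a Python function `add(a, b)` that returns the sum "
                "of its two arguments.",
    function_start="def add(a, b):",
))
```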
2308.07124#139 | OctoPack: Instruction Tuning Code Large Language Models | We include the function start for the synthesize part of HUMANEVALEXPLAIN in case the function name is not included in the code description generated by GPT-4. <NME> {filename} <BEF> {context} <MSG> {instruction} <DFF> # Figure 24: Prompting format for Diff Codegen 2B evaluated in Appendix I. 42 # OctoPack: Instruction Tuning Code Large Language Models # O EXAMPLES O.1 OCTOCODER # Input: # Question: Fix bugs in sum_product. #include<stdio.h> #include<vector> using namespace std; #include<algorithm> #include<math.h> #include<stdlib.h> vector<int> sum_product(vector<int> numbers){ int sum=0,product=0; for (int i=0;i<numbers.size();i++) { sum+=numbers[i]; product*=numbers[i]; } return {sum,product}; } Answer: #include<stdio.h> #include<vector> using namespace std; #include<algorithm> #include<math.h> #include<stdlib.h> vector<int> sum_product(vector<int> numbers){ | 2308.07124#138 | 2308.07124#140 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#140 | OctoPack: Instruction Tuning Code Large Language Models | OCTOCODER: } int sum=0,product=1; for (int i=0;i<numbers.size();i++) { sum+=numbers[i]; product*=numbers[i]; } return {sum,product}; Figure 25: Correct solution from OCTOCODER to a HUMANEVALFIX C++ problem. The corresponding Python problem is depicted in Figure 13. 43 # OctoPack: Instruction Tuning Code Large Language Models # Input: Question: Write a Java function â public String decodeShift(String s)â to solve the following problem: takes as input string encoded with encodeShift function. Returns decoded string. Answer: import java.util.*; import java.lang.*; class Solution { /** returns encoded string by shifting every character by 5 in the alphabet. */ public String encodeShift(String s) { StringBuilder sb = new StringBuilder(); for (char ch : s.toCharArray()) { sb.append((char) (â aâ + ((ch + 5 - â aâ ) % 26))); } return sb.toString(); } /** takes as input string encoded with encodeShift function. Returns decoded string. */ public String decodeShift(String s) { OCTOCODER: } StringBuilder sb = new StringBuilder(); for (char ch : s.toCharArray()) { sb.append((char) (â aâ + ((ch - â aâ - 5) % 26))); } return sb.toString(); } Figure 26: Incorrect solution from OCTOCODER to a HUMANEVALSYNTHESIZE Java problem. The correct solution would be to add 21 to ch instead of subtracting 5 from it. 44 | 2308.07124#139 | 2308.07124#141 | 2308.07124 | [
"2302.00288"
]
|
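The reasoning behind the caption above can be checked directly: Java's remainder operator can yield negative values, so subtracting 5 before taking `% 26` maps some characters outside 'a'..'z', whereas adding 21 (congruent to -5 modulo 26) stays in range. A small sketch emulating Java's truncating remainder:

```python
def java_rem(a: int, b: int) -> int:
    # Java's % truncates toward zero, so it can return negative values.
    return a - int(a / b) * b

def decode(ch: str, shift: int) -> str:
    return chr(ord("a") + java_rem(ord(ch) - ord("a") + shift, 26))

print(decode("a", -5))   # buggy fix: negative remainder -> '\', not a letter
print(decode("a", 21))   # correct fix: 'v', which encodeShift maps back to 'a'
```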
2308.07124#141 | OctoPack: Instruction Tuning Code Large Language Models | # OctoPack: Instruction Tuning Code Large Language Models # Input: Question: Provide a concise natural language description of the code using at most 671 characters. func FruitDistribution(s string,n int) int { split := strings.Split(s, " ") for _, i := range split { atoi, err := strconv.Atoi(i) if err != nil { continue } n = n - atoi } return n } Answer: OCTOCODER: The function FruitDistribution takes two arguments, a string s and an integer n. The function splits the string s into a slice of strings using the space character as the delimiter. The function iterates over the slice and converts each string to an integer using the strconv.Atoi function. If the conversion is successful, the integer is subtracted from n. The function returns the value of n after all conversions have been performed. # Input (without previous input): Question: Write functional code in Go according to the description. The function FruitDistribution takes two arguments, a string s and an integer n. The function splits the string s into a slice of strings using the space character as the delimiter. The function iterates over the slice and converts each string to an integer using the strconv.Atoi function. If the conversion is successful, the integer is subtracted from n. The function returns the value of n after all conversions have been performed. | 2308.07124#140 | 2308.07124#142 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#142 | OctoPack: Instruction Tuning Code Large Language Models | Answer: func FruitDistribution(s string,n int) int { OCTOCODER: } fruits := strings.Split(s, " ") for _, fruit := range fruits { if num, err := strconv.Atoi(fruit); err == nil { n -= num } } return n Figure 27: Correct solution from OCTOCODER to a HUMANEVALEXPLAIN Go problem. Given its own description OCTOCODER writes a correct solution to the problem. Its solution is more compact and efficient than the original solution. | 2308.07124#141 | 2308.07124#143 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#143 | OctoPack: Instruction Tuning Code Large Language Models | 45 # OctoPack: Instruction Tuning Code Large Language Models O.2 GPT-4 # User: Write a Go function â func RescaleToUnit(numbers []float64) []float64â to solve the following problem: Given a list of numbers (of at least two elements), apply a linear transform to that list, such that the smallest number will become 0 and the largest will become 1 >>> RescaleToUnit([1.0, 2.0, 3.0, 4.0, 5.0]) [0.0, 0.25, 0.5, 0.75, 1.0] | 2308.07124#142 | 2308.07124#144 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#144 | OctoPack: Instruction Tuning Code Large Language Models | GPT-4: Here is a Go function that solves the problem: package main import ( "fmt" "math" ) func RescaleToUnit(numbers []float64) []float64 { min := numbers[0] max := numbers[0] // Find the min and max values in the list for _, number := range numbers { if number < min { min = number } if number > max { max = number } } // Apply the linear transform to each number in the list for i, number := range numbers { numbers[i] = (number - min) / (max - min) } return numbers } func main() { fmt.Println(RescaleToUnit([]float64{1.0, 2.0, 3.0, 4.0, 5.0})) } "fmt" "math" ) fmt.Println(RescaleToUnit([]float64{1.0, 2.0, 3.0, 4.0, 5.0})) This function first finds the minimum and maximum values in the list of numbers. Then it applies the linear transformation to each number in the list. The transformation is â (number - min) / (max - min)â , which scales the number such that â minâ becomes 0 and â maxâ becomes 1. The transformed numbers are returned in a new list. Figure 28: Incorrect solution from GPT-4 to a HUMANEVALSYNTHESIZE Go problem. GPT-4 includes an import that it does not use (math) which leads to a compilation error in Golang and thus the function does not pass the test cases. | 2308.07124#143 | 2308.07124#145 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#145 | OctoPack: Instruction Tuning Code Large Language Models | 46 # OctoPack: Instruction Tuning Code Large Language Models # User: Provide a concise natural language description of the code using at most 246 characters. # def truncate_number(number: float) -> float: return number % 1.0 GPT-4: The code defines a function that takes a float as input and returns its fractional part by using the modulus operator with 1.0. # User (without previous conversation history): The code defines a function that takes a float as input and returns its fractional part by using the modulus operator with 1.0. Write functional code in Python according to the description. Start your code with: | 2308.07124#144 | 2308.07124#146 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#146 | OctoPack: Instruction Tuning Code Large Language Models | def truncate_number(number: float) -> float: GPT-4: def truncate_number(number: float) -> float: return number % 1.0 Figure 29: Correct solution from GPT-4 to a HUMANEVALEXPLAIN Python problem. GPT-4 writes a description that is significantly shorter than the docstring and thus within the character limit. It then generates the perfect solution from its own description. The docstring for this problem can be seen in Figure 12. | 2308.07124#145 | 2308.07124#147 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#147 | OctoPack: Instruction Tuning Code Large Language Models | 47 # OctoPack: Instruction Tuning Code Large Language Models User: fn has_close_elements(numbers:Vec<f32>, threshold: f32) -> bool{ for i in 0..numbers.len(){ for j in 1..numbers.len(){ if i != j { let distance:f32 = numbers[i] - numbers[j]; if distance < threshold{ return true; } } } } return false; } def check(has_close_elements): assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False assert has_close_elements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False check(has_close_elements) Fix bugs in has_close_elements. check(has_close_elements) Fix bugs in has_close_elements. | 2308.07124#146 | 2308.07124#148 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#148 | OctoPack: Instruction Tuning Code Large Language Models | GPT-4: The function has_close_elements is supposed to check if there are any two elements in the vector that are closer than a given threshold. However, there are a few issues with the current implementation: 1. The distance calculation is not correct. It should be the absolute difference between the two numbers, not just the difference. 2. The second loop should start from i+1, not 1. This is because we only need to compare each number with the numbers after it, not the numbers before it. Here is the corrected function: # fn has_close_elements(numbers:Vec<f32>, threshold: f32) -> bool{ for i in 0..numbers.len(){ for j in i+1..numbers.len(){ let distance:f32 = (numbers[i] - numbers[j]).abs(); if distance < threshold{ return true; } } } return false; } This function will now correctly check if there are any two elements in the vector that are closer than the given threshold. Figure 30: Correct solution from GPT-4 to a HUMANEVALFIX Rust problem. GPT-4 correctly fixes the bug by adding the necessary â | 2308.07124#147 | 2308.07124#149 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#149 | OctoPack: Instruction Tuning Code Large Language Models | absâ statement to compute the absolute value. It additionally improves the efficiency of the function by starting the inner loop from â i+1â instead of 1 like in the original solution. As some problems in HUMANEVALPACK are not perfectly efficient, evaluating the ability of models to improve the efficiency of the solutions is another interesting benchmarking direction (Yetistiren et al., 2022). 48 # OctoPack: Instruction Tuning Code Large Language Models User: def car_race_collision(n: int): return n**3 def check(car_race_collision): assert car_race_collision(2) == 4 assert car_race_collision(3) == 9 assert car_race_collision(4) == 16 assert car_race_collision(8) == 64 assert car_race_collision(10) == 100 check(car_race_collision) # Fix bugs in car_race_collision. | 2308.07124#148 | 2308.07124#150 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#150 | OctoPack: Instruction Tuning Code Large Language Models | GPT-4: The function car_race_collision seems to be working fine as it is. It takes an integer â nâ as input and returns the cube of â nâ . There donâ t seem to be any bugs in this function. if the function is supposed to calculate something specific related to a car race However, collision and itâ s not doing that correctly, we would need more information about the expected behavior to fix it. Figure 31: Incorrect solution from GPT-4 to a HUMANEVALFIX Python problem. GPT-4 does not catch that the test cases expect the function to square, not cube n. OCTOCODER correctly modifies the 3 to 2 in 6 out of n=20 generations. | 2308.07124#149 | 2308.07124#151 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#151 | OctoPack: Instruction Tuning Code Large Language Models | 49 # OctoPack: Instruction Tuning Code Large Language Models # User: def rounded_avg(n, m): if m < n: return -1 summation = 0 for i in range(n, m+1): summation += i return bin(round(summation/(m - n))) def check(rounded_avg): # Check some simple cases assert rounded_avg(1, 5) == "0b11" assert rounded_avg(7, 13) == "0b1010" assert rounded_avg(964,977) == "0b1111001010" assert rounded_avg(996,997) == "0b1111100100" assert rounded_avg(560,851) == "0b1011000010" assert rounded_avg(185,546) == "0b101101110" assert rounded_avg(362,496) == "0b110101101" assert rounded_avg(350,902) == "0b1001110010" assert rounded_avg(197,233) == "0b11010111" # Check some edge cases that are easy to work out by hand. assert rounded_avg(7, 5) == -1 assert rounded_avg(5, 1) == -1 assert rounded_avg(5, 5) == "0b101" check(rounded_avg) Fix bugs in rounded_avg. def check(rounded_avg): check(rounded_avg) | 2308.07124#150 | 2308.07124#152 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#152 | OctoPack: Instruction Tuning Code Large Language Models | GPT-4: The function rounded_avg is supposed to calculate the average of all numbers between n and m (inclusive), round it, and then return the binary representation of the result. There are several bugs in the function: 1. The function should return -1 if m < n, but it should also return -1 if m == n because in this case, the denominator of the average calculation would be 0, which would cause a division by zero error. 2. The function calculates the average by summing all numbers between n and m and then dividing by (m - n). However, this is incorrect because the number of integers between n and m (inclusive) is (m - n + 1), not (m - n). 3. The function uses the bin function to convert the result to binary. However, the bin function returns a string that starts with â 0bâ to indicate that it is a binary number. If the function is supposed to return the binary representation as an integer, it should remove this prefix. Here is the corrected function: # def rounded_avg(n, m): if m <= n: return -1 summation = 0 for i in range(n, m+1): summation += i average = round(summation / (m - n + 1)) return int(bin(average)[2:]) This function now correctly calculates the average, rounds it, and returns the binary representation as an integer. If n is greater than or equal to m, it returns -1. Figure 32: Incorrect solution from GPT-4 to a HUMANEVALFIX Python problem. The bug consists of a "+1" missing after "m-n". GPT-4 correctly fixes that, however, it introduces two new bugs: Retuning "-1" when m=n and removing the "0b", both of which evidently lead to the provided unit tests failing. | 2308.07124#151 | 2308.07124#153 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#153 | OctoPack: Instruction Tuning Code Large Language Models | 50 # OctoPack: Instruction Tuning Code Large Language Models O.3 WIZARDCODER # Input: Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Fix bugs in bf. def bf(planet1, planet2): planet_names = ("Mercury", "Venus", "Earth", "Mars", "Jupyter", "Saturn", "Uranus", " Neptune") if planet1 not in planet_names or planet2 not in planet_names or planet1 == planet2: return () planet1_index = planet_names.index(planet1) planet2_index = planet_names.index(planet2) if planet1_index < planet2_index: return (planet_names[planet1_index + 1: planet2_index]) else: return (planet_names[planet2_index + 1 : planet1_index]) def check(bf): # Check some simple cases assert bf("Jupiter", "Neptune") == ("Saturn", "Uranus"), "First test error: " + str(len(bf ("Jupiter", "Neptune"))) assert bf("Earth", "Mercury") == ("Venus",), "Second test error: " + str(bf("Earth", " Mercury")) assert bf("Mercury", "Uranus") == ("Venus", "Earth", "Mars", "Jupiter", "Saturn"), "Third test error: " + str(bf("Mercury", "Uranus")) assert bf("Neptune", "Venus") == ("Earth", "Mars", "Jupiter", "Saturn", "Uranus"), "Fourth test error: " + str(bf("Neptune", "Venus")) # Check some edge cases that are easy to work out by hand. assert bf("Earth", "Earth") == () assert bf("Mars", "Earth") == () assert bf("Jupiter", "Makemake") == () check(bf) ### Response: def bf(planet1, planet2): # WizardCoder: | 2308.07124#152 | 2308.07124#154 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#154 | OctoPack: Instruction Tuning Code Large Language Models | planet_names = ("Mercury", "Venus", "Earth", "Mars", "Jupyter", "Saturn", "Uranus", " Neptune") if planet1 not in planet_names or planet2 not in planet_names or planet1 == planet2: return () planet1_index = planet_names.index(planet1) planet2_index = planet_names.index(planet2) if planet1_index < planet2_index: return (planet_names[planet1_index + 1: planet2_index]) else: return (planet_names[planet2_index + 1 : planet1_index]) Figure 33: Incorrect solution from WizardCoder to a HUMANEVALFIX Python problem. The bug consists of "Jupiter" being misspelled as "Jupyter" in the function. WizardCoder returns the same function without making a single change leaving the bug. This is a particularly difficult bug that only GPT-4 and OCTOCODER (only for 1 out of the n=20 generations) are able to solve. 51 # OctoPack: Instruction Tuning Code Large Language Models # O.4 BLOOMZ # Input: const generateIntegers = (a, b) => { if (a > b) { let tmp = a; a = b; b = tmp; } let y = [] for (let i = a; i <= b; i++) { if (i == 2 || i == 4 || i == 6 || i == 8) { y.push(i) } } return y } Provide a concise natural language description of the code using at most 218 characters. # BLOOMZ: Generate a list of integers from a to b. # Input (without previous input): Generate a list of integers from a to b. Write functional code in JavaScript according to the description. const generateIntegers = (a, b) => { # BLOOMZ: } let integers = []; for (let i = a; i <= b; i++) { integers.push(i); } return integers; Figure 34: Incorrect solution from BLOOMZ to a HUMANEVALEXPLAIN JavaScript problem. BLOOMZ misses in its description that the function generates only the even digits between a and b, in ascending order. Thus, its synthesized function based on its own description simply collects all integers between a and b. | 2308.07124#153 | 2308.07124#155 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#155 | OctoPack: Instruction Tuning Code Large Language Models | BLOOMZ likely left out this part due to its bias for short and concise answers. However, this bias backfires for this task. O.5 STARCODER 52 # OctoPack: Instruction Tuning Code Large Language Models # Input: import java.util.*; import java.lang.*; class Solution { public Map<String, Integer> histogram(String test) { Map<String, Integer> dict1 = new HashMap<>(); List<String> list1 = Arrays.asList(test.split(" " )); int t = 0; for (String i : list1) { if (Collections.frequency(list1, i) > t && !i.isEmpty()) { t = Collections.frequency(list1, i); } } if (t > 0) { for (String i : list1) { if (Collections.frequency(list1, i) == t) { dict1.put(i, t); } } } return dict1; } } Provide a concise natural language description of the code using at most 423 characters. StarCoder: public class Solution { public Map<String, Integer> histogram(String test) { int t = 0; } } if (t > 0) { Write functional code in Java according to the description. import java.util.*; import java.lang.*; class Solution { public Map<String, Integer> histogram(String test) { StarCoder: Map<String, Integer> map = new HashMap<String, Integer>(); int t = 0; for (int i = 0; i < test.length(); i++) { if (test.charAt(i) == â tâ ) { t++; } } map.put("t", t); return map; } } Figure 35: Incorrect solution from StarCoder to a HUMANEVALEXPLAIN Java problem. Star- Coder is not capable of generating natural language descriptions and instead copies the solution. However, our post-processing removes substrings from the solution (§3) leading to a useless descrip- tion. Given this description it generates an irrelevant function. As StarCoder cannot generate natural language explanations, we disqualify it from HUMANEVALEXPLAIN setting its score to 0 in Table 2. | 2308.07124#154 | 2308.07124#156 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#156 | OctoPack: Instruction Tuning Code Large Language Models | 53 # OctoPack: Instruction Tuning Code Large Language Models INSTRUCTCODET5+ # Input: Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Fix bugs in has_close_elements. from typing import List def has_close_elements(numbers: List[float], threshold: float) -> bool: for idx, elem in enumerate(numbers): for idx2, elem2 in enumerate(numbers): if idx != idx2: distance = elem - elem2 if distance < threshold: return True return False assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False assert has_close_elements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False check(has_close_elements) | 2308.07124#155 | 2308.07124#157 | 2308.07124 | [
"2302.00288"
]
|
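The post-processing referred to in the caption above (removing substrings of the reference solution from a generated description, so that models cannot pass HUMANEVALEXPLAIN by copying code instead of explaining it) can be illustrated with a simple line-level filter. This is only a sketch of the idea; the paper's actual rule in Section 3 may differ in granularity and thresholds.

```python
def strip_copied_lines(description: str, solution: str) -> str:
    """Drop description lines that appear verbatim in the reference solution."""
    solution_lines = {line.strip() for line in solution.splitlines() if line.strip()}
    kept = [line for line in description.splitlines()
            if line.strip() and line.strip() not in solution_lines]
    return "\n".join(kept)

solution = "def add(a, b):\n    return a + b"
description = "def add(a, b):\n    return a + b"  # pure copy, as in Figure 35
print(repr(strip_copied_lines(description, solution)))  # '' -> useless description
```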
2308.07124#157 | OctoPack: Instruction Tuning Code Large Language Models | ### Response:from typing import List def has_close_elements(numbers: List[float], threshold: float) -> bool: InstructCodeT5+: for idx, elem in enumerate(numbers): for idx2, elem2 in enumerate(numbers): if idx != idx2: distance = elem - elem2 if distance < threshold: return True return False Figure 36: Incorrect solution from InstructCodeT5+ to a HUMANEVALFIX Python problem. InstructCodeT5+ simply repeated the function without adding the missing "abs" statement to turn the distance into an absolute value. This is one of the simplest bugs that most other models are able to fix. For InstructCodeT5+ the prompt starts directly after "Response:" without a newline (Figure 19). | 2308.07124#156 | 2308.07124#158 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#158 | OctoPack: Instruction Tuning Code Large Language Models | O.7 STARCHAT-β 54 # OctoPack: Instruction Tuning Code Large Language Models # Input: #include<stdio.h> #include<math.h> #include<vector> #include<string> #include<algorithm> using namespace std; #include<stdlib.h> int specialFilter(vector<int> nums){ int num=0; for (int i=0;i<nums.size();i++) if (nums[i]>10) { string w=to_string(nums[i]); if (w[0]%2==1 and w[w.length()-1]%2==1) num+=1; } return num; } Provide a concise natural language description of the code using at most 296 characters. | 2308.07124#157 | 2308.07124#159 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#159 | OctoPack: Instruction Tuning Code Large Language Models | StarChat-β: This code is a solution to the "Special Array" problem on LeetCode. It uses C++ and includes several standard libraries, including vector, string, and algorithm. The code starts by declaring some variables and importing the necessary libraries. It then defines a specialFilter function that takes # Input (without previous input): This code is a solution to the "Special Array" problem on LeetCode. It uses C++ and includes several standard libraries, including vector, string, and algorithm. The code starts by declaring some variables and importing the necessary libraries. It then defines a specialFilter function that takes Write functional code in C++ according to the description. #include<stdio.h> #include<math.h> #include<vector> #include<string> #include<algorithm> using namespace std; #include<stdlib.h> int specialFilter(vector<int> nums){ | 2308.07124#158 | 2308.07124#160 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#160 | OctoPack: Instruction Tuning Code Large Language Models | # StarChat-β: int n = nums.size(); int sum = 0; for(int i = 0; i < n; i++){ sum += pow(nums[i], 2); } int target = (int)sqrt(sum); vector<int> res; for(int i = 0; i < n; i++){ if(nums[i] > target){ res.push_back(nums[i]); } } return res; } Figure 37: Incorrect solution from StarChat-β to a HUMANEVALEXPLAIN C++ problem. StarChat-β has a bias for excessively long answers. StarChat-βâ s description includes irrelevant parts like "..problem on LeetCode..includes several standard libraries.." (despite the problem not existing on LeetCode). As the description exceeds the 296-character limit for the code, it is cut off at the end. | 2308.07124#159 | 2308.07124#161 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#161 | OctoPack: Instruction Tuning Code Large Language Models | 55 # OctoPack: Instruction Tuning Code Large Language Models O.8 DIFF CODEGEN # Input: <NME> truncate_number.py <BEF> def truncate_number(number: float) -> float: """ Given a positive floating point number, it can be decomposed into and integer part (largest integer smaller than given number) and decimals (leftover part always smaller than 1). Return the decimal part of the number. >>> truncate_number(3.5) 0.5 """ return number % 1.0 + 1.0 <MSG> Fix bugs in truncate_number. <DFF> | 2308.07124#160 | 2308.07124#162 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#162 | OctoPack: Instruction Tuning Code Large Language Models | # Diff Codegen 2B @@ -4,7 +4,7 @@ def truncate_number(number: float) -> float: and integer part (largest integer smaller than given number) and decimals (leftover part always smaller than 1). - + Return the decimal part of the number. Return Figure 38: Incorrect solution from Diff Codegen 2B to a HUMANEVALFIXDOCS Python problem. Diff Codegen 2B suggests an irrelevant diff modifying parts of the docstring. The model commonly outputs diffs that modify the docstring or an import statement and rarely addresses the actual bug. # P LIMITATIONS AND FUTURE WORK Model Execution A promising avenue for improving performance on HUMANEVALFIX is letting the model execute the given code or its own generated code and inspect its output (Chen et al., 2022; 2023c; Yasunaga & Liang, 2021; Li et al., 2022a; Gao et al., 2023; Dong et al., 2023; Zhang et al., 2023c; Madaan et al., 2023b; Ni et al., 2023; Gou et al., 2023; Hu et al., 2023; Taylor et al., 2022; Nye et al., 2021). This could allow the model to discover which unit tests are failing and for what reason. The model could then simply iterate on the function until all unit tests are passing. We leave explorations of this strategy to improve performance on HUMANEVALPACK to future work. Multi-file changes For the creation of COMMITPACK, we have filtered out any commits that affect multiple files to ensure commits are very specific and account for the fact that most current models are only capable of operating on a single file. Allowing models to take multiple files as input and modify multiple files given a single instruction is a promising direction for future work. | 2308.07124#161 | 2308.07124#163 | 2308.07124 | [
"2302.00288"
]
|
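The execution-guided strategy sketched under Model Execution above can be prototyped as a simple run-and-retry loop. This is an illustrative sketch, not part of OctoPack: `generate_fix` stands in for any instruction-tuned Code LLM, candidates are executed against the task's unit tests in a subprocess (which should be sandboxed in practice), and failing output is fed back into the next round.

```python
import subprocess
import sys
from typing import Callable

def run_tests(candidate: str, tests: str, timeout: int = 10) -> tuple[bool, str]:
    """Execute a candidate program followed by its unit tests; return (passed, stderr)."""
    proc = subprocess.run(
        [sys.executable, "-c", candidate + "\n" + tests],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.returncode == 0, proc.stderr

def repair_loop(buggy_code: str, tests: str,
                generate_fix: Callable[[str, str], str],
                max_rounds: int = 3) -> str:
    """Iteratively ask the model for a fix until the unit tests pass."""
    candidate, feedback = buggy_code, ""
    for _ in range(max_rounds):
        candidate = generate_fix(candidate, feedback)  # model call (assumed interface)
        passed, feedback = run_tests(candidate, tests)
        if passed:
            break
    return candidate
```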
2308.07124#163 | OctoPack: Instruction Tuning Code Large Language Models | There is active research on using repository-level context (Ding et al., 2022; Shrivastava et al., 2023a;b; Zhang et al., 2023a; Liu et al., 2023d) and the necessary long context windows (Dai et al., 2019; Press et al., 2021; Sun et al., 2021; Dao et al., 2022; Peng et al., 2023; Liu et al., 2023c; Chen et al., 2023b). Length-awareness Current Code LLMs including OCTOCODER struggle with awareness about the length of their generated output. For HUMANEVALEXPLAIN, we instruct the models to limit their output to a given number of characters. While it is trivial for humans to count characters and adhere to the limit, all models tested frequently generate far too many characters. Prior work has shown that human raters are biased towards preferring longer texts (Wu & Aji, 2023) regardless of content. All models evaluated are instruction tuned on text that was at least indirectly assessed by human raters, hence they may be biased towards generating longer texts even if it means including literary bloat. Better evaluation Evaluating code instruction models is challenging for several reasons: (1) Prompting: The prompt can significantly impact the performance of large language mod- | 2308.07124#162 | 2308.07124#164 | 2308.07124 | [
"2302.00288"
]
|
2308.07124#164 | OctoPack: Instruction Tuning Code Large Language Models | 56 # OctoPack: Instruction Tuning Code Large Language Models els (Brown et al., 2020; Zhou et al., 2022; Muennighoff, 2022; Babe et al., 2023). To ensure fair evaluation we use the prompting format put forth by the respective authors of the models and a simple intuitive prompt for models without a canonical prompt (see Appendix N). However, this may put models without a canonical prompt recommendation (e.g. BLOOMZ, GPT-4) at a slight disadvantage. OCTOCODER and OCTOGEEX perform best when prompted using the same format we use during training (Figure 17) and we recommend always using this format at inference. (2) Processing: Models may accidentally impair otherwise correct code by e.g. including a natural language explanation in their output. We largely circumvent this issue through the use of strict stopping criteria and careful postprocessing (e.g. for GPT-4 we check if it has enclosed the code in backticks, and if so, extract only the inner part of the backticks discarding its explanations). (3) Execution: When executing code to compute pass@k, it is important that the generated code matches the installed programming language version. Models may inadvertently use expressions from a different version (e.g. they may use the Python 2 syntax of print "hi", which would fail in a Python 3 environment). In our evaluation, we did not find this to be a problem, however, as models become more capable, it may make sense to specify the version. Future prompts may include the version (e.g. â use JDK 1.18.0â ) or provide models with an execution environment that has the exact version installed that will be used for evaluation. (4) Comprehensiveness: Executing code can only reflect functional correctness lacking a comprehen- sive understanding of quality. Compared to execution-based evaluation, the human judgment of code quality can be considered more comprehensive as humans can consider factors beyond correctness. Directly hiring human annotators can be inefficient and expensive, and therefore researchers have explored approaches to automate human-aligned evaluation via LLMs (Fu et al., 2023; Liu et al., 2023e; Zhuo, 2023). However, recent work (Wang et al., 2023b) suggests LLM-based evaluation can be biased towards certain contexts. | 2308.07124#163 | 2308.07124#165 | 2308.07124 | [
"2302.00288"
]
|
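Since all HUMANEVALPACK scores are execution-based pass@k values, the numerically stable unbiased estimator popularized by the original HumanEval evaluation applies; the snippet below is a standard reference implementation shown for context rather than the exact evaluation code used here.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: n samples drawn, c of them pass all unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: with n=20 generations per problem (as used for the robust pass@1
# estimates) and 6 passing samples, pass@1 reduces to the passing fraction.
print(round(pass_at_k(n=20, c=6, k=1), 3))  # 0.3
```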
2308.07124#165 | OctoPack: Instruction Tuning Code Large Language Models | Future work on automating the human-aligned evaluation of instruction tuned Code LLMs while avoiding such bias is needed. Reward Models Our commit datasets, COMMITPACK and COMMITPACKFT, also lend themselves well for learning human preferences. The changed code after a commit generally represents a human- preferred version of the code (else the code would not have been modified). Thus, one could train a reward model that given the code before and after a commit, learns that the code afterward is better. Similar to prior work (Ouyang et al., 2022), this reward model could then be used to guide a language model to generate code that is preferred by humans. | 2308.07124#164 | 2308.07124#166 | 2308.07124 | [
"2302.00288"
]
|
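To illustrate the reward-modeling idea, the snippet below sketches a standard pairwise (Bradley-Terry style) preference loss in PyTorch, treating the post-commit code as the preferred sequence. The scoring model and how code is tokenized are placeholders; no such reward model is released with OctoPack.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(reward_model: torch.nn.Module,
                         before_ids: torch.Tensor,
                         after_ids: torch.Tensor) -> torch.Tensor:
    """Preference loss pushing post-commit code to score above pre-commit code.

    `reward_model` is assumed to map a batch of token-id sequences to one
    scalar reward per sequence (shape: batch).
    """
    r_before = reward_model(before_ids)
    r_after = reward_model(after_ids)
    # -log sigmoid(r_after - r_before) is minimized when r_after > r_before.
    return -F.logsigmoid(r_after - r_before).mean()
```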
2308.07124#166 | OctoPack: Instruction Tuning Code Large Language Models | # Q OCTOBADPACK Figure 39: OCTOPACK (left) and her evil brother OCTOBADPACK (right). 57 | 2308.07124#165 | 2308.07124 | [
"2302.00288"
]
|
|
2308.07107#0 | Large Language Models for Information Retrieval: A Survey | arXiv:2308.07107v3 [cs.CL] 19 Jan 2024 # Large Language Models for Information Retrieval: A Survey Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zhicheng Dou, and Ji-Rong Wen Abstract— | 2308.07107#1 | 2308.07107 | [
"2305.03195"
]
|
|
2308.07107#1 | Large Language Models for Information Retrieval: A Survey | As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions, such as search agents, within this expanding field. Index Termsâ Large Language Models; Information Retrieval; Query Rewrite; Rerank; Reader; Fine-tuning; Prompting | 2308.07107#0 | 2308.07107#2 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#2 | Large Language Models for Information Retrieval: A Survey | â ¦ # 1 INTRODUCTION needs of human beings. To fulfill the need for rapid acquisition of desired information, various information re- trieval (IR) systems have been developed [1â 4]. Prominent examples include search engines such as Google, Bing, and Baidu, which serve as IR systems on the Internet, adept at retrieving relevant web pages in response to user queries, and provide convenient and efficient access to information on the Internet. It is worth noting that IR extends beyond web page retrieval. In dialogue systems (chatbots) [1, 5â 8], such as Microsoft Xiaoice [2], Apple Siri,1 and Google Assistant,2 IR systems play a crucial role in retrieving appro- priate responses to user input utterances, thereby producing natural and fluent human-machine conversations. Similarly, in question-answering systems [3, 9], IR systems are em- ployed to select relevant clues essential for addressing user questions effectively. In image search engines [4], IR systems excel at returning images that align with user input queries. Given the exponential growth of information, research and industry have become increasingly interested in the devel- opment of effective IR systems. The core function of an IR system is retrieval, which aims to determine the relevance between a user-issued query and the content to be retrieved, including various types of information such as texts, images, music, and more. | 2308.07107#1 | 2308.07107#3 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#3 | Large Language Models for Information Retrieval: A Survey | For the scope of this survey, we concentrate solely on review- ing those text retrieval systems, in which query-document relevance is commonly measured by their matching score.3 Given that IR systems operate on extensive repositories, the efficiency of retrieval algorithms becomes of paramount importance. To improve the user experience, the retrieval performance is enhanced from both the upstream (query reformulation) and downstream (reranking and reading) perspectives. As an upstream technique, query reformu- lation is designed to refine user queries so that they are more effective at retrieving relevant documents [10, 11]. With the recent surge in the popularity of conversational search, this technique has received increasing attention. On the downstream side, reranking approaches are developed to further adjust the document ranking [12â | 2308.07107#2 | 2308.07107#4 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#4 | Large Language Models for Information Retrieval: A Survey | 14]. In contrast to the retrieval stage, reranking is performed only on a limited set of relevant documents, already retrieved by the retriever. Under this circumstance, the emphasis is placed on achieving higher performance rather than keeping higher efficiency, allowing for the application of more complex ap- proaches in the reranking process. Additionally, reranking can accommodate other specific requirements, such as per- sonalization [15â 18] and diversification [19â 22]. Following the retrieval and reranking stages, a reading component is incorporated to summarize the retrieved documents and de- liver a concise document to users [23, 24]. While traditional IR systems typically require users to gather and organize relevant information themselves; however, the reading com- ponent is an integral part of new IR systems such as New All authors are from Gaoling School of Artificial Intelligence and School of Information, Renmin University of China. Contact e-mail: [email protected], [email protected] | 2308.07107#3 | 2308.07107#5 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#5 | Large Language Models for Information Retrieval: A Survey | 1. Apple Siri, https://www.apple.com/siri/ 2. Google Assistant, https://assistant.google.com/ 3. The term â documentâ will henceforth refer to any text-based con- tent subject to retrieve, including both long articles and short passages. 1 Tradition al IR Components Q New Query Search Context Documents Candidate Selected Documents Query; Rewriter Retriever | | Reranker Response, Q Query, Response, : Large Language Models â @ Response Q Query, ChatGPT QOQLLaMA GF lan-T5 ©)GLM (BLOOM (2) Search Agent | 2308.07107#4 | 2308.07107#6 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#6 | Large Language Models for Information Retrieval: A Survey | Fig. 1. Overview of existing studies that apply LLMs into IR. (1) LLMs can be used to enhance traditional IR components, such as query rewriter, retriever, reranker, and reader. (2) LLMs can also be used as search agents to perform multiple IR tasks. Bing,4 streamlining usersâ browsing experience and saving valuable time. The trajectory of IR has traversed a dynamic evolution, transitioning from its origins in term-based methods to the integration of neural models. Initially, IR was anchored in term-based methods [25] and Boolean logic, focusing on keyword matching for document retrieval. The paradigm gradually shifted with the introduction of vector space mod- els [26], unlocking the potential to capture nuanced semantic relationships between terms. This progression continued with statistical language models [27, 28], refining relevance estimation through contextual and probabilistic considera- tions. The influential BM25 algorithm [29] played an im- portant role during this phase, revolutionizing relevance ranking by accounting for term frequency and document length variations. | 2308.07107#5 | 2308.07107#7 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#7 | Large Language Models for Information Retrieval: A Survey | The most recent chapter in IRâ s journey is marked by the ascendancy of neural models [3, 30â 32]. These models excel at capturing intricate contextual cues and semantic nuances, reshaping the landscape of IR. However, these neural models still face challenges such as data scarcity, interpretability, and the potential generation of plausible yet inaccurate responses. Thus, the evolution of IR continues to be a journey of balancing traditional strengths (such as the BM25 algorithmâ s high efficiency) with the remarkable capability (such as semantic understanding) brought about by modern neural architectures. Large language models (LLMs) have recently emerged as transformative forces across various research fields, such as natural language processing (NLP) [33â 35], recommender systems [36â 39], finance [40], and even molecule discov- ery [41]. | 2308.07107#6 | 2308.07107#8 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#8 | Large Language Models for Information Retrieval: A Survey | These cutting-edge LLMs are primarily based on the Transformer architecture and undergo extensive pre- training on diverse textual sources, including web pages, research articles, books, and codes. As their scale contin- ues to expand (including both model size and data vol- ume), LLMs have demonstrated remarkable advances in their capabilities. On the one hand, LLMs have exhibited unprecedented proficiency in language understanding and generation, resulting in responses that are more human-like and better align with human intentions. On the other hand, the larger LLMs have shown impressive emergent abilities | 2308.07107#7 | 2308.07107#9 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#9 | Large Language Models for Information Retrieval: A Survey | 4. New Bing, https://www.bing.com/new when dealing with complex tasks [42], such as general- ization and reasoning skills. Notably, LLMs can effectively apply their learned knowledge and reasoning abilities to tackle new tasks with just a few task-specific demonstrations or appropriate instructions [43, 44]. Furthermore, advanced techniques, such as in-context learning, have significantly enhanced the generalization performance of LLMs without requiring fine-tuning on specific downstream tasks [34]. This breakthrough is particularly valuable, as it reduces the need for extensive fine-tuning while attaining remarkable task performance. Powered by prompting strategies such as chain-of-thought, LLMs can generate outputs with step-by- step reasoning, navigating complex decision-making pro- cesses [45]. Leveraging the impressive power of LLMs can undoubtedly improve the performance of IR systems. By incorporating these sophisticated language models, IR systems can provide users with more accurate responses, ultimately reshaping the landscape of information access and retrieval. Initial efforts have been made to utilize the potential of LLMs in the development of novel IR systems. Notably, in terms of practical applications, New Bing is designed to improve the usersâ experience of using search engines by extracting information from disparate web pages and con- densing it into concise summaries that serve as responses to user-generated queries. In the research community, LLMs have proven useful within specific modules of IR systems (such as retrievers), thereby enhancing the overall perfor- mance of these systems. Due to the rapid evolution of LLM- enhanced IR systems, it is essential to comprehensively review their most recent advancements and challenges. Our survey provides an insightful exploration of the in- tersection between LLMs and IR systems, covering key per- spectives such as query rewriters, retrievers, rerankers, and readers (as shown in Figure 1).5 We also include some recent studies that leverage LLMs as search agents to perform various IR tasks. This analysis enhances our understanding of LLMsâ potential and limitations in advancing the IR field. | 2308.07107#8 | 2308.07107#10 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#10 | Large Language Models for Information Retrieval: A Survey | 5. As yet, there has not been a formal definition for LLMs. In this pa- per, we mainly focus on models with more than 1B parameters. We also notice that some methods do not rely on such strictly defined LLMs, but due to their representativeness, we still include an introduction to them in this survey. 2 For this survey, we create a Github repository by collecting the relevant papers and resources about LLM4IR.6 We will continue to update the repository with newer papers. This survey will also be periodically updated according to the development of this area. We notice that there are several surveys for PLMs, LLMs, and their applications (e.g., AIGC or recommender systems) [46â | 2308.07107#9 | 2308.07107#11 | 2308.07107 | [
"2305.03195"
]
|
2308.07107#11 | Large Language Models for Information Retrieval: A Survey | 52]. Among these, we highly recommend the survey of LLMs [52], which provides a systematic and comprehensive reference to many important aspects of LLMs. Compared with them, we focus on the techniques and methods for developing and applying LLMs for IR systems. In addition, we notice a perspective paper discussing the opportunity of IR when meeting LLMs [53]. It would be an excellent supplement to this survey regarding future directions. The remaining part of this survey is organized as fol- lows: Section 2 introduces the background for IR and LLMs. Section 3, 4, 5, 6 respectively review recent progress from the four perspectives of query rewriter, retriever, reranker, and reader, which are four key components of an IR system. Then, Section 8 discusses some potential directions in future research. Finally, we conclude the survey in Section 9 by summarizing the major findings. # 2 BACKGROUND 2.1 Information Retrieval Information retrieval (IR), as an essential branch of com- puter science, aims to efficiently retrieve information rel- evant to user queries from a large repository. Generally, users interact with the system by submitting their queries in textual form. Subsequently, IR systems undertake the task of matching and ranking these user-supplied queries against an indexed database, thereby facilitating the retrieval of the most pertinent results. The field of IR has witnessed significant advancement with the emergence of various models over time. One such early model is the Boolean model, which employs Boolean logic operators to combine query terms and retrieve doc- uments that satisfy specific conditions [25]. | 2308.07107#10 | 2308.07107#12 | 2308.07107 | [
"2305.03195"
]
|