doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.10512
| 42 |
...Since k and b have the same sign, and k is negative, we can conclude that b is also negative. Therefore, ... So the correct answer is A: First quadrant. ... C: Third quadrant D: Fourth quadrant (y = kx + b) R4: The correct answer is option C: Third quadrant. ... the only option left is the second quadrant. Therefore, ...does not pass through the third quadrant. R5: The correct answer is option C: Third quadrant. ...a straight line with a negative slope... Therefore, the line must pass through either the second or third quadrant. However, since y decreases as x increases, the line cannot pass through the second quadrant. Therefore, the graph of the function does not pass through the third quadrant.
|
2306.10512#42
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 43 |
Figure 8: The illustration of ChatGPT's "Fickle-Minded" characteristic: it answers the same question 5 times, and gives 4 different answers (only R3 is correct).
that the label has a 10% probability of changing from 1 to 0. Interestingly, ChatGPT's SE curve is very close to the student SE curve of Guess=10%, Slip=30% (red). From this, we can deduce that ChatGPT behaves like a "careless student" who is prone to slip (30%) and occasionally guesses the answers (10%).
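To make the Guess/Slip comparison concrete, here is a minimal sketch of how such a "careless student" can be simulated, assuming a standard 2PL IRT response model; the guess and slip rates and the item parameters below are illustrative stand-ins, not the paper's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def irt_prob_correct(theta, a, b):
    """2PL IRT: probability that an examinee of ability theta answers an
    item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def noisy_response(theta, a, b, guess=0.10, slip=0.30):
    """One response with guess/slip noise: a correct answer flips to wrong
    with probability `slip`, a wrong answer flips to correct with `guess`."""
    correct = rng.random() < irt_prob_correct(theta, a, b)
    if correct:
        return 0 if rng.random() < slip else 1
    return 1 if rng.random() < guess else 0

# Example: a mid-ability examinee (theta = 0.5) answering 20 items of
# increasing difficulty, each with discrimination a = 1.0.
theta = 0.5
items = [(1.0, b) for b in np.linspace(-2, 2, 20)]
print([noisy_response(theta, a, b) for a, b in items])
```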
|
2306.10512#43
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 44 |
ChatGPT is "Fickle-Minded". In the testing of ChatGPT, we discover that if it is asked to answer the same question repeatedly, it often produces varied and even contrasting responses. Figure 8 illustrates that it provides four different answers to the same question asked five times in different sessions. This "inconsistency" is largely due to ChatGPT being a probability-based generative model; while this ensures each response is diverse and creative, it also introduces potential issues. As a result, this inconsistency creates noise/uncertainty during the test. We also investigate the impact of the temperature parameter, which controls the level of randomness or creativity in the text generated by ChatGPT [9]. Figure 7(b) shows that as the temperature increases, so does the uncertainty of ChatGPT's answers. Therefore, when asking ChatGPT to solve rigorous problems (such as mathematics), a lower temperature is preferred.
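As a rough illustration of why a higher temperature yields more variable answers, the sketch below applies temperature-scaled softmax to a set of made-up next-token scores (the logits and temperatures are illustrative; this is not ChatGPT's internal implementation): low temperatures concentrate almost all probability on the top candidate, while high temperatures flatten the distribution so that alternative continuations are much more likely to be sampled.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: low T sharpens the distribution toward
    the highest-scoring token, high T flattens it and increases randomness."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [3.2, 2.9, 0.5, -1.0]        # hypothetical scores for four candidate tokens
for T in (0.2, 0.7, 1.5):
    print(f"T={T}:", np.round(softmax_with_temperature(logits, T), 3))
```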
# 4.2 Comparison of Different LLMs
In addition to ChatGPT, we also use the above CAT method to compare the cognitive levels of other models (Table 2). More importantly, in order to compare their abilities with humans intuitively, we also show the ability estimates of high-ability (Top 20%) and middle-ability (Top 50%) students, where CODIA and MOOC involve college students and MATH involves high school students.
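For reference, the core adaptive-testing loop behind these comparisons can be sketched as follows: repeatedly select the unused item that is most informative at the current ability estimate, query the examinee (a student or an LLM), and re-estimate the ability by maximum likelihood. This is only a minimal sketch under a 2PL IRT model with Fisher-information item selection; `answer_fn` and `item_bank` are hypothetical stand-ins rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    """2PL item response function: probability of answering correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information contributed by one item at ability theta."""
    p = p_correct(theta, a, b)
    return (a ** 2) * p * (1.0 - p)

def mle_theta(responses):
    """Maximum-likelihood ability estimate from (a, b, y) response triples."""
    def neg_log_lik(theta):
        nll = 0.0
        for a, b, y in responses:
            p = np.clip(p_correct(theta, a, b), 1e-6, 1 - 1e-6)
            nll -= y * np.log(p) + (1 - y) * np.log(1 - p)
        return nll
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

def adaptive_test(answer_fn, item_bank, n_items=20):
    """Basic CAT loop: pick the most informative unused item at the current
    theta estimate, record the examinee's response, and re-estimate theta."""
    n_items = min(n_items, len(item_bank))
    theta, asked, responses = 0.0, set(), []
    for _ in range(n_items):
        j = max((i for i in range(len(item_bank)) if i not in asked),
                key=lambda i: fisher_info(theta, *item_bank[i]))
        asked.add(j)
        a, b = item_bank[j]
        y = answer_fn(j)          # 1 if the examinee answers item j correctly, else 0
        responses.append((a, b, y))
        theta = mle_theta(responses)
    return theta
```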
|
2306.10512#44
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 45 |
GPT4 is the Best. GPT4 scores significantly higher than the other LLMs in mathematical reasoning, programming, and subject knowledge. In particular, the subject knowledge level of GPT4 surpasses high-ability college students (Top 20%) in almost every knowledge concept. A large amount of knowledge can be "stored" with its massive training data and unprecedented model parameters, which is one of the reasons why other language models cannot beat it.
Each LLM has its own strengths. For example, for programming level (CODIA), GPT4 is good at Dynamic Programming and Math Problem, and ChatGPT is good at Search Problem. Although Spark's average programming ability is lower than that of GPT4, using programming to solve mathematical problems is its forte. Therefore, although many LLMs have not announced the specific details of the data used, we have reason to infer that, e.g., ChatGPT/GPT4 used more coding-related data and Spark used more mathematics-related data in the training stage.
Table 2: Estimated value (θ̂) for students and each model. Boldface indicates the highest ability value among these LLMs. The underline "__" indicates that the model surpasses mid-ability students (Top 50%). "*" indicates that this model surpasses high-ability students (Top 20%).
|
2306.10512#45
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 46 |
Instruction Tuned LLMs Student Top Bard ChatGPT GPT4 ERNIEBOT QianWen Spark 20% Equations and Inequalities Function Permutation and Combination Geometry Average Rank *0.77 0.59 0.49 0.58 0.35 0.56 0.55 0.55 0.57 0.36 0.55 0.36 0.56 0.12 0.56 0.22 0.32 0.56 High-Ability > GPT4 ≈ Mid-Ability > Spark > Bard > ERNIEBOT > ChatGPT > QianWen 0.44 0.14 0.48 0.03 0.01 0.22 0.46 0.14 0.26 0.25 0.36 0.29 0.37 0.14 0.14 0.13 0.24 0.21 *0.66 0.37 0.58 0.57 0.25 0.49 0.65 0.66 0.65 0.65 0.66 0.65 Dynamic Programming *0.83 0.40 *0.84 0.51 0.49 0.61 0.63 0.34 0.58 0.37 0.58 0.46 0.61 0.23 0.54 0.00 0.28 0.59 High-Ability > GPT4 > Mid-Ability > ChatGPT >
|
2306.10512#46
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 47 |
0.58 0.46 0.61 0.23 0.54 0.00 0.28 0.59 High-Ability > GPT4 > Mid-Ability > ChatGPT > Spark > ERNIEBOT > QianWen > Bard *0.72 0.40 0.60 *0.73 0.38 0.57 0.42 0.29 0.39 0.41 0.27 0.35 0.42 0.29 0.39 0.41 0.34 0.37 0.40 0.29 0.60 0.41 0.27 0.40 0.70 0.67 0.66 0.70 0.63 0.67 Programming Language *0.80 0.63 0.48 *0.78 0.66 0.68 0.60 0.66 *1.00 0.60 0.78 0.60 GPT4 > Bard > ChatGPT ≈ High-Ability > Mid-Ability > Spark > QianWen > ERNIEBOT 0.57 *0.67 0.70 0.67 *0.79 0.68 *0.78 *0.99 *0.82 0.66 *0.77 0.80 0.26 *0.77 0.49 0.23 0.34 0.42 0.47 *0.88 0.38 0.03 0.46
|
2306.10512#47
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 49 |
Mathematical reasoning of LLMs still has a long way to go. Mathematical reasoning ability is an important aspect of evaluating LLMs. Unfortunately, according to the ability estimates output by CAT, even the well-performing GPT4 and Spark models are only equivalent to mid-ability high school students. After all, an LLM is essentially a probability-based sequence-to-sequence generative model rather than something that thinks and reasons like a human, and the Transformer alone is clearly not enough to imitate human cognitive structures and processes. Therefore, problem-solving based on cognition/reasoning [48, 49, 50, 51] is still lacking in LLMs.
# 5 Conclusion
More and more users are exploring LLMs' abilities in different aspects, even asking them to do things that "normal" NLP models cannot do, such as generating code, making PowerPoint slides, and writing emails. Thus, how to evaluate their abilities scientifically and efficiently is of significant importance. In this paper, we propose a general adaptive testing framework inspired by how humans are assessed: Computerized Adaptive Testing (CAT). Thanks to its high efficiency, fewer questions are required for the same evaluation accuracy, which greatly reduces labor cost and computation overhead. In the future, we will explore the impact of different prompts on ability estimation and further estimate the abilities of more LLMs.
# References
[1] OpenAI. Gpt-4 technical report, 2023.
|
2306.10512#49
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 50 |
# References
[1] OpenAI. Gpt-4 technical report, 2023.
[2] Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. Is chatgpt a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476, 2023.
[3] Patryk Orzechowski and Jason H Moore. Generative and reproducible benchmarks for comprehensive evaluation of machine learning classifiers. Science Advances, 8(47):eabl4747, 2022.
[4] Chris Drummond and Nathalie Japkowicz. Warning: statistical benchmarking is addictive. Kicking the habit in machine learning. Journal of Experimental & Theoretical Artificial Intelligence, 22(1):67–80, 2010.
[5] José Hernández-Orallo, Bao Sheng Loe, Lucy Cheke, Fernando Martínez-Plumed, and Seán Ó hÉigeartaigh. General intelligence disentangled via a generality metric for natural and artificial intelligence. Scientific Reports, 11(1):22822, 2021.
|
2306.10512#50
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 51 |
[6] Lingyao Li, Zihui Ma, Lizhou Fan, Sanggyu Lee, Huizi Yu, and Libby Hemphill. Chatgpt in education: A discourse analysis of worries and concerns on social media. arXiv preprint arXiv:2305.02201, 2023.
[7] Petter Törnberg. Chatgpt-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning. arXiv preprint arXiv:2304.06588, 2023.
[8] Yuxiang Zhao and Qinghua Zhu. Evaluation on crowdsourcing research: Current status and future direction. Information Systems Frontiers, 16:417–434, 2014.
[9] OpenAI. Overview - openai api, 2023. https://platform.openai.com/overview.
[10] Wim J van der Linden and Cees AW Glas. Computerized adaptive testing: Theory and practice. Springer, 2000.
[11] Susan E Embretson and Steven P Reise. Item response theory. Psychology Press, 2013.
[12] Frederic M Lord. Applications of item response theory to practical testing problems. Routledge, 2012.
|
2306.10512#51
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 52 |
2012.
[13] Hua-Hua Chang and Zhiliang Ying. A global information approach to computerized adaptive testing. Applied Psychological Measurement, 20(3):213–229, 1996.
[14] Haoyang Bi, Haiping Ma, Zhenya Huang, Yu Yin, Qi Liu, Enhong Chen, Yu Su, and Shijin Wang. Quality meets diversity: A model-agnostic framework for computerized adaptive testing. In 2020 IEEE International Conference on Data Mining (ICDM), pages 42–51. IEEE, 2020.
[15] Andrew S Lan, Andrew E Waters, Christoph Studer, and Richard G Baraniuk. Sparse factor analysis for learning and content analytics. Journal of Machine Learning Research (JMLR), 2014.
[16] Jill-Jênn Vie, Fabrice Popineau, Éric Bruillard, and Yolaine Bourda. A review of recent advances in adaptive assessment. Learning analytics: fundaments, applications, and trends, pages 113–142, 2017.
[17] Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. Exploring ai ethics of chatgpt: A diagnostic analysis. arXiv preprint arXiv:2301.12867, 2023.
|
2306.10512#52
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 53 |
[18] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models, 2023.
[19] Hua-Hua Chang. Psychometrics behind computerized adaptive testing. Psychometrika, 80(1):1–20, 2015.
[20] Diane Ravitch. National standards in American education: A citizen's guide. ERIC, 1995.
[21] Wynne Harlen. The Assessment of Scientific Literacy in the OECD/PISA Project, pages 49–60. Springer Netherlands, Dordrecht, 2001.
[22] Fei Wang, Qi Liu, Enhong Chen, Zhenya Huang, Yuying Chen, Yu Yin, Zai Huang, and Shijin Wang. Neural cognitive diagnosis for intelligent education systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6153–6161, 2020.
|
2306.10512#53
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 54 |
[23] Xinping Wang, Caidie Huang, Jinfang Cai, and Liangyu Chen. Using knowledge concept aggregation towards accurate cognitive diagnosis. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2010–2019, 2021.
[24] Weibo Gao, Qi Liu, Zhenya Huang, Yu Yin, Haoyang Bi, Mu-Chun Wang, Jianhui Ma, Shijin Wang, and Yu Su. RCD: Relation map driven cognitive diagnosis for intelligent education systems. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 501–510, 2021.
[25] Pedro Rodriguez, Joe Barrow, Alexander Miserlis Hoyle, John P Lalor, Robin Jia, and Jordan Boyd-Graber. Evaluation examples are not equally informative: How should that change NLP leaderboards? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4486–4503, 2021.
|
2306.10512#54
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 55 |
[26] John P Lalor, Hao Wu, and Hong Yu. Building an evaluation scale using item response theory. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing, volume 2016, page 648. NIH Public Access, 2016.
[27] João Sedoc and Lyle Ungar. Item response theory for efficient human evaluation of chatbots. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 21–33, 2020.
[28] Mark Hopkins and Jonathan May. Models of translation competitions. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1416–1424, 2013.
[29] Naoki Otani, Toshiaki Nakazawa, Daisuke Kawahara, and Sadao Kurohashi. IRT-based aggregation model of crowdsourced pairwise comparison for evaluating machine translations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 511–520, 2016.
[30] Giles Hooker, Matthew Finkelman, and Armin Schwartzman. Paradoxical results in multidimensional item response theory. Psychometrika, 74(3):419–442, 2009.
|
2306.10512#55
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 56 |
[31] Lawrence M Rudner. An examination of decision-theory adaptive testing procedures. In annual meeting of the American Educational Research Association, 2002.
[32] Wim J van der Linden. Bayesian item selection criteria for adaptive testing. Psychometrika, 63(2):201–216, 1998.
[33] Yan Zhuang, Qi Liu, Zhenya Huang, Zhi Li, Binbin Jin, Haoyang Bi, Enhong Chen, and Shijin Wang. A robust computerized adaptive testing approach in educational question retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, pages 416–426, New York, NY, USA, 2022. Association for Computing Machinery.
[34] Darkhan Nurakhmetov. Reinforcement learning applied to adaptive classification testing. In Theoretical and Practical Advances in Computer-based Educational Measurement, pages 325–336. Springer, Cham, 2019.
[35] Xiao Li, Hanchen Xu, Jinming Zhang, and Hua-hua Chang. Deep reinforcement learning for adaptive learning systems. arXiv preprint arXiv:2004.08410, 2020.
|
2306.10512#56
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 57 |
[36] Aritra Ghosh and Andrew Lan. BOBCAT: Bilevel optimization-based computerized adaptive testing. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 2410–2417. International Joint Conferences on Artificial Intelligence Organization, August 2021.
[37] Yan Zhuang, Qi Liu, Zhenya Huang, Zhi Li, Shuanghong Shen, and Haiping Ma. Fully adaptive framework: Neural computerized adaptive testing for online education. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4):4734–4742, Jun. 2022.
[38] Tianyou Wang and Walter P Vispoel. Properties of ability estimation methods in computerized adaptive testing. Journal of Educational Measurement, 35(2):109–135, 1998.
|
2306.10512#57
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 58 |
[38] Tianyou Wang and Walter P Vispoel. Properties of ability estimation methods in computerized adaptive testing. Journal of Educational Measurement, 35(2):109–135, 1998.
[39] Sheldon M Ross. A first course in probability. Pearson, 2014.
[40] Bradley Efron and David V Hinkley. Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information. Biometrika, 65(3):457–483, 1978.
[41] Chun Wang and Hua-Hua Chang. Item selection in multidimensional computerized adaptive testing – gaining information from different angles. Psychometrika, 76:363–384, 2011.
[42] C. Wang, D. J. Weiss, and Z. Shang. Variable-length stopping rules for multidimensional computerized adaptive testing. Psychometrika, 2018.
[43] Ali Kashefi and Tapan Mukerji. Chatgpt for programming numerical methods. arXiv preprint arXiv:2303.12093, 2023.
[44] Som Biswas. Role of chatgpt in computer programming: Chatgpt in computer programming. Mesopotamian Journal of Computer Science, 2023:8–16, 2023.
|
2306.10512#58
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 59 |
[45] Xin Lin, Zhenya Huang, Hongke Zhao, Enhong Chen, Qi Liu, Defu Lian, Xin Li, and Hao Wang. Learning relation-enhanced hierarchical solver for math word problems. IEEE Transactions on Neural Networks and Learning Systems, pages 1–15, 2023.
[46] Seung W Choi, Matthew W Grady, and Barbara G Dodd. A new stopping rule for computerized adaptive testing. Educational and Psychological Measurement, 71(1):37–53, 2011.
[47] Wim J Van der Linden and Cees AW Glas. Elements of adaptive testing, volume 10. Springer, 2010.
[48] Jiayu Liu, Zhenya Huang, Chengxiang Zhai, and Qi Liu. Learning by applying: A general framework for mathematical reasoning via enhancing explicit knowledge learning. arXiv preprint arXiv:2302.05717, 2023.
[49] Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. Cognitive graph for multi-hop reading comprehension at scale. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2694–2703, 2019.
|
2306.10512#59
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.10512
| 60 |
[50] Xin Lin, Zhenya Huang, Hongke Zhao, Enhong Chen, Qi Liu, Hao Wang, and Shijin Wang. HMS: A hierarchical solver with dependency-enhanced understanding for math word problem. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 4232–4240, 2021.
[51] Jiayu Liu, Zhenya Huang, Xin Lin, Qi Liu, Jianhui Ma, and Enhong Chen. A cognitive solver with autonomously knowledge learning for reasoning mathematical answers. In 2022 IEEE International Conference on Data Mining (ICDM), pages 269–278. IEEE, 2022.
|
2306.10512#60
|
Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective
|
Large language models (LLMs), like ChatGPT, have shown some human-like
cognitive abilities. For comparing these abilities of different models, several
benchmarks (i.e. sets of standard test questions) from different fields (e.g.,
Literature, Biology and Psychology) are often adopted and the test results
under traditional metrics such as accuracy, recall and F1, are reported.
However, such way for evaluating LLMs can be inefficient and inaccurate from
the cognitive science perspective. Inspired by Computerized Adaptive Testing
(CAT) used in psychometrics, we propose an adaptive testing framework for LLM
evaluation. Rather than using a standard test set and simply reporting
accuracy, this approach dynamically adjusts the characteristics of the test
questions, such as difficulty, based on the model's performance. This allows
for a more accurate estimation of the model's abilities, using fewer questions.
More importantly, it allows LLMs to be compared with humans easily, which is
essential for NLP models that aim for human-level ability. Our diagnostic
reports have found that ChatGPT often behaves like a ``careless student'',
prone to slip and occasionally guessing the questions. We conduct a
fine-grained diagnosis and rank the latest 6 instruction-tuned LLMs from three
aspects of Subject Knowledge, Mathematical Reasoning, and Programming, where
GPT4 can outperform other models significantly and reach the cognitive ability
of middle-level students. Different tests for different models using efficient
adaptive testing -- we believe this has the potential to become a new norm in
evaluating large language models.
|
http://arxiv.org/pdf/2306.10512
|
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen
|
cs.CL
| null | null |
cs.CL
|
20230618
|
20231028
|
[
{
"id": "2305.02201"
},
{
"id": "2302.06476"
},
{
"id": "2304.06588"
},
{
"id": "2301.12867"
},
{
"id": "2303.12093"
},
{
"id": "2302.05717"
},
{
"id": "2004.08410"
}
] |
2306.09896
| 1 |
# ABSTRACT
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
# INTRODUCTION
|
2306.09896#1
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 2 |
# INTRODUCTION
Large language models (LLMs) have proven capable of generating code snippets from natural language specifications, but still struggle on complex coding tasks such as those found in professional software engineering interviews. Recent work has sought to improve performance by leveraging self-repair (Gupta et al., 2020; Le et al., 2022; Chen et al., 2023b; Zhang et al., 2023), in which the model introspects and corrects mistakes in its own code. A typical workflow is shown in Figure 1. First, a program is sampled from the code generation model; this program is then executed on a suite of unit tests provided as part of the specification; if the program fails, then the error message and the faulty program are given to a feedback generation model, which outputs a short explanation of why the code failed; finally, the feedback is passed to a repair model, which generates a fixed version of the program.1 On the surface, this self-repair workflow is a very attractive idea. It allows the system to overcome mistakes caused by unfortunate samples during decoding; easily incorporates feedback during the repair phase from symbolic systems such as compilers, static analysis tools, and execution engines; and mimics the trial-and-error way in which human software engineers write code.
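The following minimal Python sketch makes this workflow concrete. The sampler arguments stand in for the code generation model and the feedback generation model (e.g. thin wrappers around an LLM call), and run_tests executes the candidate against the unit tests; all of these names are illustrative placeholders rather than the paper's implementation.

```python
# A minimal sketch of the self-repair workflow in Figure 1 (assumed interfaces,
# not the paper's actual implementation).
def self_repair_once(spec, tests, sample_program, run_tests,
                     sample_feedback, sample_repair):
    program = sample_program(spec)                     # (2) sample an initial program
    passed, error = run_tests(program, tests)          # (3) execute the unit tests
    if passed:
        return program                                 # program already satisfies the tests
    feedback = sample_feedback(spec, program, error)   # (4) explain why it failed
    return sample_repair(spec, program, error, feedback)  # (5) sample a repaired program
```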
|
2306.09896#2
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 3 |
However, it is important to remember that self-repair requires more invocations of the model, thus increasing the computational cost. Whether this is a winning strategy or not ultimately boils down to whether you would -- at an equivalent compute budget -- have had a greater chance of success if you had simply drawn more code samples i.i.d. from the model and checked them against the suite of unit tests provided as part of the task. Crucially, the efficacy of self-repair depends not only on the model's ability to generate code, which has been studied extensively in the literature, but also on its ability to identify how the code (generated by the model itself) is wrong with respect to the task specification. As far as we are aware, no previous or contemporary work has attempted to study the effect of this stage in detail.
*Correspondence to [email protected]. Work partially done while T.X.O. was at Microsoft Research.
1 In practice, generating feedback and producing the corrected code can be done through a single interaction with the model; as we will see, it can still be useful to conceptually treat them as separate steps.
|
2306.09896#3
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 4 |
Figure 1 contents:
(1) Specification: Given is a string s representing the day of the week today. s is one of SUN, MON, TUE, WED, THU, FRI, or SAT. After how many days is the next Sunday (tomorrow or later)? Unit tests (executable): assert f('MON') == 6; assert f('WED') == 4; assert f('SUN') == 7
(2) Initial program: def f(s): return (7 - ['SUN', ... , 'FRI', 'SAT'].index(s)) % 7
(3) Error message: Given input 'SUN', the program returned 0, but the expected output was 7.
(4) Feedback: The code does not account for the case where the input is 'SUN' and the output should be 7. This can be fixed by removing the modulo operation.
(5) Repaired program: def f(s): return (7 - ['SUN', ... , 'FRI', 'SAT'].index(s)) # % 7
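For reference, a directly runnable reconstruction of this example is sketched below; the full day list is taken from the problem statement, and the function names are ours.

```python
# Figure 1 example: the buggy program maps 'SUN' to 0 instead of 7 because of
# the trailing modulo; the repair simply removes it.
DAYS = ['SUN', 'MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT']

def f_buggy(s):
    return (7 - DAYS.index(s)) % 7

def f_repaired(s):
    return 7 - DAYS.index(s)

assert f_buggy('MON') == 6 and f_buggy('WED') == 4
assert f_buggy('SUN') == 0              # violates the spec: expected 7
assert f_repaired('MON') == 6 and f_repaired('WED') == 4
assert f_repaired('SUN') == 7           # repaired program passes all unit tests
```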
|
2306.09896#4
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 5 |
Figure 1: Self-repair with separate code and feedback models. First, a user gives a textual specification and a suite of unit tests (1). Then, a code model (blue) generates a program (2). The program is checked against the unit tests, and an error message is returned (3). In order to strengthen the signal to the code model, textual feedback as to why this happened is generated by a feedback model (yellow; 4). Finally, this feedback is used by the code model to repair the initial program (5).
Contributions: In this paper, we investigate the efficacy of self-repair techniques applied to CodeLlama-13b-instruct (Rozière et al., 2023), GPT-3.5 (Ouyang et al., 2022; OpenAI, 2022), and GPT-4 (OpenAI, 2023), with a specific emphasis on their capacity to reflect upon and debug their own code. We observe that:
|
2306.09896#5
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 6 |
⢠Self-repair is not a silver bullet: when the cost of repair is taken into account, we find several instances in which pass rates are higher or equally high with i.i.d. sampling (without repair), including Code Llama on HumanEval and GPT-3.5 on APPS for almost all sample budgets. We conjecture that this is because program generation and repair rates trend together, and many subtle factors influence which one will overpower the other for a given task (see Appendix B).
⢠Self-repair is more likely to be beneficial when more of the sampling budget is spent on generating a diverse set of initial programs than on carrying out extensive repair. For example, for GPT-4 on APPS, drawing 10 samples up front and then 1 repair candidate each (up to 20 samples total) leads to a pass rate which is 5% higher than pass@20 from the same model without repair; drawing 2 samples up front and then drawing 10 repair candidates each (up to 22 samples total) leads to a pass rate which is 3% lower than the baseline pass@22.
|
2306.09896#6
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 7 |
⢠Artificially boosting the quality of the feedback significantly improves the efficacy of self-repair: we observe this both when replacing Code Llamaâs feedback with that produced by GPT-3.5 and when replacing GPT-3.5âs feedback with that of GPT-4, with both configurations beating out their corresponding i.i.d. sampling baselines at all budgets. Furthermore, replacing GPT-4âs own explanations with those of a human programmer improves repair significantly, leading to a 57% increase in the number of repaired programs which pass the tests.
# 2 RELATED WORK
|
2306.09896#7
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 8 |
# 2 RELATED WORK
Program synthesis with large language models. The use of large language models for program synthesis has been studied extensively in the literature (Li et al., 2022; Austin et al., 2021; Chen et al., 2021; Le et al., 2022; Fried et al., 2023; Nijkamp et al., 2023; Chowdhery et al., 2022; Touvron et al., 2023; Li et al., 2023). This literature has predominantly focused on evaluating models in terms of either raw accuracy or the pass@k metric (Kulal et al., 2019; Chen et al., 2021), often leveraging filtering techniques based on execution (Li et al., 2022; Shi et al., 2022) or ranking (Chen et al., 2021; Inala et al., 2022; Zhang et al., 2022) to reduce the number of samples which are considered for the final answer. Our work differs from some of the work in this literature in that we assume access to the full collection of input-output examples, as is typically done in inductive synthesis (Kitzelmann, 2010; Polozov & Gulwani, 2015; Gulwani et al., 2017; Chen et al., 2019a; Ellis et al., 2021). In particular,
|
2306.09896#8
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 9 |
unlike some prior work (Li et al., 2022; Shi et al., 2022), we do not make a distinction between public tests used for filtering and private tests used to determine correctness, since our method does not involve filtering the outputs.
Code repair. Statistical and learning-based techniques for code repair have a rich history in both the programming languages and machine learning communities, although they have traditionally been used predominantly to repair human-written code (Long & Rinard, 2016; Bader et al., 2019; Le Goues et al., 2021; Yasunaga & Liang, 2021; Chen et al., 2019b; Mesbah et al., 2019; Wang et al., 2018). More recently, using repair as a post-processing step to improve code which was itself automatically synthesised has been used in the synthesis of both domain-specific languages (Gupta et al., 2020) and general-purpose code (Le et al., 2022; Yasunaga & Liang, 2021; 2020). Our contribution differs from most prior work in this literature in the use of textual feedback for repair, which is possible thanks to the above mentioned rise in the use of LLMs for program synthesis.
|
2306.09896#9
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 10 |
Contemporary work on LLM self-repair. There is much contemporary work seeking to self-repair with LLMs, both in code generation and beyond, so we now highlight a few papers which are particularly close to our setting; see Pan et al. (2023) for a more complete survey of recent work in this quickly evolving field. Zhang et al. (2023) explore self-repair without natural language feedback on APPS (Hendrycks et al., 2021) using both finetuned models and prompt-based self-repair with Codex (Chen et al., 2021), InCoder (Fried et al., 2023), and CodeGen (Nijkamp et al., 2023). Notably, their framework does not consider the cost associated with feedback and repair, which presents a significantly different perspective. Similarly, Chen et al. (2023b) assess Codex's ability to self-repair across a variety of tasks, in a framework that closely resembles that which we study in this work. However, their study differs from ours in terms of the models considered and, more importantly, the research goal, as we specifically aim to investigate the significance of the textual feedback stage. Outside of code generation, self-repair has been used for a wide array of
|
2306.09896#10
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 11 |
the research goal, as we specifically aim to investigate the significance of the textual feedback stage. Outside of code generation, self-repair has been used for a wide array of purposes, including mitigating hallucinations and improving factual grounding in search assistants (Peng et al., 2023) as well as code optimization and readability improvements (Madaan et al., 2023). Ultimately, we see our work, in which we investigate the significance of the textual feedback stage in particular, as being complementary to contemporary research which seeks to evaluate self-repair in a broader context; we are eager to see what the implications of our results will be in these other domains.
|
2306.09896#11
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 12 |
3 METHODOLOGY
3.1 SELF-REPAIR OVERVIEW
As shown in Figure 1, we model self-repair as consisting of four stages: code generation, code execution, feedback generation, and code repair. We now formally define these different stages.
Code generation. Given a specification ψ, a programming model MP first generates np samples i.i.d., which we denote
$\{p_i\}_{i=1}^{n_p} \overset{\text{i.i.d.}}{\sim} M_P(\psi)$
Code execution. These np code samples are then executed against a test bed.2 If any sample p passes all of the tests -- which we denote p |= ψ -- we stop, since a satisfying program has then been found. Otherwise, we collect the error messages {ei}i returned by the execution environment. These error messages either contain the compile/runtime error information or an example input on which the program's output differs from the expected one. An example is shown in Figure 1 (component 3).
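A minimal sketch of such a test bed for Python candidates is given below, assuming the unit tests are plain assert statements as in Figure 1; this is our own illustration, not the paper's harness.

```python
def execute(program_src: str, tests_src: str):
    """Run a candidate program against executable unit tests.
    Returns (True, None) if the program passes, else (False, error_message)."""
    env = {}
    try:
        exec(program_src, env)    # define the candidate function(s)
        exec(tests_src, env)      # run the assert-based unit tests
        return True, None
    except AssertionError as err:
        return False, f"Failed unit test: {err}"        # wrong output on some input
    except Exception as err:
        return False, f"{type(err).__name__}: {err}"    # compile/runtime error
```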
Feedback generation. Error messages from the execution environment are usually very high-level, providing little signal for repair. Therefore, as an intermediate step, we use a feedback model to produce a more detailed explanation of what went wrong; Figure 1 (component 4) shows an example. Formally, in this stage, we generate nf feedback strings, {fij}j, for each wrong program, pi, as follows:
|
2306.09896#12
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 13 |
$\{f_{ij}\}_{j=1}^{n_f} \overset{\text{i.i.d.}}{\sim} M_F(\psi; p_i; e_i)$
Having an explicit feedback generation step allows us to ablate this component so that we can study its significance in isolation.
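A sketch of this stage is shown below. The prompt wording is illustrative only (the actual one-shot prompts are given in Appendix F), and llm is a hypothetical text-completion function rather than a real API.

```python
def sample_feedback(llm, spec, program, error, n_f=1):
    # Build a prompt from the specification, the wrong program p_i, and the
    # error message e_i, then draw n_f i.i.d. feedback strings.
    prompt = (
        f"Specification:\n{spec}\n\n"
        f"Incorrect program:\n{program}\n\n"
        f"Error message:\n{error}\n\n"
        "Explain concisely why the program is wrong:"
    )
    return [llm(prompt) for _ in range(n_f)]
```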
2We assume access to the full set of tests in executable form; see Section 5 for a brief discussion on the validity of this assumption in software engineering domains.
Figure 2: A repair tree begins with a specification ψ (root node), then grows into initial programs, feedback, and repairs.
Code repair. In the final step, for each initial program pi and feedback fij, nr candidate repaired programs are sampled from MP
$\{r_{ijk}\}_{k=1}^{n_r} \overset{\text{i.i.d.}}{\sim} M_P(\psi; p_i; e_i; f_{ij})$
|
2306.09896#13
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 14 |
$\{r_{ijk}\}_{k=1}^{n_r} \overset{\text{i.i.d.}}{\sim} M_P(\psi; p_i; e_i; f_{ij})$
Repair tree. We call the tree of interleaved text and programs produced by this procedure -- rooted in the specification ψ, then branching into initial programs pi, each of which branches into feedback fij and then repairs rijk -- a repair tree, T (Figure 2).
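One possible in-memory representation of a repair tree is sketched below; the field names are ours, not the paper's.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Repair:
    feedback: str                # f_ij
    programs: List[str]          # r_ij1 ... r_ijn_r

@dataclass
class InitialProgram:
    program: str                 # p_i
    error: Optional[str]         # e_i, or None if p_i already satisfies the spec
    repairs: List[Repair] = field(default_factory=list)

@dataclass
class RepairTree:
    spec: str                    # the specification (root node)
    initial: List[InitialProgram] = field(default_factory=list)

    def num_programs(self) -> int:
        # number of program samples in the tree (at most np + np * nfr)
        return len(self.initial) + sum(
            len(r.programs) for p in self.initial for r in p.repairs)
```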
Jointly sampling feedback and repair. The general framework presented above does not require the programming model and feedback model to be the same, thus allowing for the use of specialized models in the system. When MP = MF , we jointly generate both the feedback and the repaired program in a single sample from the model; see Appendix F for a detailed look at how the prompt differs between this and the previous setting. Formally, we denote this as
$\{(f_{ij}, r_{ij})\}_{j=1}^{n_{fr}} \overset{\text{i.i.d.}}{\sim} M_P(\psi; p_i; e_i)$
PASS@K FOR SELF-REPAIR
|
2306.09896#14
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 15 |
PASS@K FOR SELF-REPAIR
In program synthesis without self-repair, performance is typically measured by pass@k (Chen et al., 2021; Kulal et al., 2019) -- the probability that at least one of k i.i.d. program samples from the model satisfies a given specification. In self-repair, program samples are drawn from the model both during the initial sample stage and during the repair stage; thus, we need to adapt pass@k to take into account the number of samples from both stages.
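For reference, the standard unbiased pass@k estimator of Chen et al. (2021) can be computed as follows, where n is the number of i.i.d. samples drawn for a task and c the number of them that pass the tests.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of the probability that at least one of k samples,
    drawn without replacement from n samples of which c are correct, passes."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. with 50 samples of which 10 pass: pass_at_k(50, 10, 1) ≈ 0.20
```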
|
2306.09896#15
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 16 |
In the main body of this work, we treat repair trees T as themselves forming independent samples from a joint model T ∼ M = (MP ∘ MF ∘ MP) and define the number of programs in the tree as |programs(T)| = np + np·nfr (or |programs(T)| = np + np·nf·nr); we then compare against a baseline with k = |programs(T)| i.i.d. samples. We believe this will make our findings most relevant to practitioners, who are likely to deploy self-repairing agents with batched sampling. Appendix A repeats our experiments with two alternative evaluation strategies, in which we vary the search strategy and measure sampling cost by the total number of tokens sampled from the model to take into account the varying lengths of feedback and program samples. Importantly, although the details differ, the overall trends which we observe remain the same.
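As a small worked example of this budget accounting (our own helper, restating the definition above):

```python
def repair_tree_budget(n_p: int, n_fr: int) -> int:
    # |programs(T)| = n_p initial programs + n_p * n_fr joint feedback-repair samples
    return n_p + n_p * n_fr

# A self-repair setting with n_p = 10 and n_fr = 1 draws up to
# repair_tree_budget(10, 1) == 20 programs per task, so it is compared
# against the baseline's pass@20.
```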
|
2306.09896#16
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 17 |
Independently generating a large number of repair trees for each setting of the hyper-parameters quickly becomes computationally infeasible, so we plot bootstrapped estimates of the pass rates in our experiments. We first generate a single very large repair tree for each task specification, with: Np ≥ np initial program samples; Nf ≥ nf feedback strings per wrong program; and Nr ≥ nr repair candidates per feedback string. Given a setting of (np, nf, nr), we then sub-sample (with replacement) Nt different sub-repair-trees from this frozen dataset and average over the runs. We use Np = 50 for all experiments, and consider np ≤ 25 for the self-repair approaches and np ≤ 50 for the baseline, no-repair approach. Similarly, for the feedback strings, we use Nf = 25 and
3We use the same model for both the initial code generation and the code repair, since these are fundamentally similar tasks.
|
2306.09896#17
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 18 |
3We use the same model for both the initial code generation and the code repair, since these are fundamentally similar tasks.
nf ≤ 10 (except for Section 4.2, in which we only consider nf = 1 and therefore settle for Nf = 10 instead). For the repair candidates, since we do joint sampling of feedback and repair in most of our experiments, we set Nr = nr = 1. Finally, we use Nt = 1000 for all settings. Estimating the pass rates in this way greatly reduces the computational cost of our experiments, since we can reuse the same initial dataset to compute the estimates for all of the various choices of np, nf , and nr.
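A simplified sketch of this bootstrap procedure is given below; it assumes each task's frozen repair tree has been flattened into (initial_passed, repair_outcomes) records, which is our own data layout rather than the paper's.

```python
import random

def bootstrap_pass_rate(frozen, n_p, n_fr, n_t=1000):
    """frozen: for each task, a list of N_p records (init_ok, repair_oks), where
    repair_oks are the pass/fail outcomes of that program's frozen repair samples."""
    task_rates = []
    for records in frozen:
        hits = 0
        for _ in range(n_t):
            subset = random.choices(records, k=n_p)  # n_p initial programs, with replacement
            hits += any(
                init_ok or (repair_oks and any(random.choices(repair_oks, k=n_fr)))
                for init_ok, repair_oks in subset)
        task_rates.append(hits / n_t)
    return sum(task_rates) / len(task_rates)         # mean pass rate over tasks
```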
# 4 EXPERIMENTS
In this section, we carry out experiments to answer the following research questions: (a) In the context of Python programming puzzles, is self-repair better than i.i.d. sampling without repair for the models we consider? If so, under what hyper-parameters is self-repair most effective? (b) Would a stronger feedback model boost the modelâs repair performance? (c) Would keeping a human in the loop to provide feedback unlock better repair performance even for the strongest model?
|
2306.09896#18
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 19 |
We evaluate these hypotheses for two API-served models -- GPT-3.5 (Ouyang et al., 2022; OpenAI, 2022) and GPT-4 (OpenAI, 2023) -- as well as CodeLlama-13b-instruct5 (Rozière et al., 2023), a model with publicly accessible weights which can be run locally on consumer-level hardware. We consider Python programming challenges from both APPS (Hendrycks et al., 2021) and HumanEval (Chen et al., 2021); for each dataset we restrict our attention to one model with stronger baseline performance (GPT-3.5 on HumanEval, GPT-4 on APPS) and one model with weaker baseline performance (Code Llama on HumanEval, GPT-3.5 on APPS). For APPS, in order to keep our experiments tractable, we evaluate on a randomly chosen set of 300 tasks.6 We implement self-repair using templated string concatenation with one-shot prompting; our prompts are given in Appendix F. Based on preliminary experiments, we set the decoding temperature to 0.8 for all models. When appropriate, we compare against a baseline without repair. This baseline,
|
2306.09896#19
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 21 |
4.1 SELF-REPAIR IS NOT A SILVER BULLET, BUT IMPROVES WITH DIVERSE INITIAL SAMPLES
In this subsection, we consider the setup where MP = MF, i.e., a true self-repair setting in which a single model is used for both code/repair generation and feedback generation. To evaluate if self-repair leads to better performance than a no-repair, i.i.d. sampling-based baseline approach, we vary np and nfr -- that is, the number of initial i.i.d. base samples and joint feedback-repair samples drawn from MP -- in the range (np, nfr) ∈ {1, 2, 5, 10, 25} × {1, 3, 5, 10}.7
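The swept grid and the total program budget of each setting can be enumerated directly (a small illustration of the ranges above):

```python
from itertools import product

# (n_p, n_fr) settings swept here, with the number of programs each may draw:
# n_p initial samples plus n_p * n_fr joint feedback-repair samples.
budgets = {(n_p, n_fr): n_p + n_p * n_fr
           for n_p, n_fr in product([1, 2, 5, 10, 25], [1, 3, 5, 10])}
# e.g. budgets[(25, 10)] == 275 is the largest self-repair budget considered
```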
|
2306.09896#21
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 22 |
Figure 4 shows the results for Code Llama and GPT-3.5 on HumanEval, while Figure 3 shows the results for GPT-3.5 and GPT-4 on the more challenging APPS dataset. In the left-hand subplots, the color of each dot indicates the number of initial samples (np), while its shape indicates the number of feedback-repair samples (nfr). In the right-hand plots, we show a heat-map with the two hyper-parameters along the axes, where the value in each cell indicates the mean pass rate with self-repair normalized by the mean pass rate of the baseline, no-repair approach when given the same budget. When the normalized mean pass rate is 1, this means that self-repair achieves the same pass rate as the no-repair, baseline approach at that same sample budget; a higher value (≥ 1) means self-repair performs better than the baseline.
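Concretely, each heat-map cell can be computed as below (our own helper, restating the definition in the text):

```python
def normalized_mean_pass_rate(self_repair_rate: float, baseline_rate: float) -> float:
    # Mean pass rate with self-repair divided by the mean pass rate of the
    # no-repair baseline at the same sample budget; >= 1 means self-repair wins.
    return self_repair_rate / baseline_rate

# e.g. normalized_mean_pass_rate(0.54, 0.50) ≈ 1.08: self-repair's mean pass
# rate is 8% higher (relative) than the baseline's at that budget.
```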
|
2306.09896#22
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 23 |
For APPS, we observe marginal gains for GPT-3.5 only for the largest values of np. GPT-4, on the other hand, shows more significant improvements, beating out the baseline by up to 8%. Meanwhile, on HumanEval we do not observe any performance gains for the weaker model (CodeLlama-13b-instruct), while the stronger model (GPT-3.5) shows some marginal gains of up to 3% increase relative to the baseline. From these observations, it is clear that self-repair is not uniformly better than a non-repair strategy, especially when the sample budget is low.
4 We use the frozen endpoints gpt-3.5-turbo-0301 and gpt-4-0314.
5 https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf
6 These tasks are proportionally sampled in accordance with the frequency of the different difficulty levels in the broader APPS test set: 180 interview-level questions, 60 competition-level questions, and 60 introductory-level questions. All tasks are listed in Appendix G.
7 Recall that when MP = MF, we jointly sample nfr pairs of feedback strings and repair programs instead of sampling them one after another (Section 3.1).
|
2306.09896#23
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 24 |
[Figure 3, panel (a) GPT-3.5: plot rendered as an image in the original. Left subplot axes: number of programs sampled vs. mean pass rate; right subplot: heat map over initial programs (np) and feedback-repair samples (nfr), with cells beyond 50 samples marked O.O.B.]
|
2306.09896#24
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 25 |
[Figure 3 plots, panels (a) GPT-3.5 and (b) GPT-4; see the caption below for axes and interpretation.]
Figure 3: GPT-3.5 and GPT-4 self-repair results on APPS. Left: Mean pass rate vs. number of samples generated. Black line is i.i.d. sampling without repair from the same model. Note that the error bars are often smaller than the markers. Right: Normalized mean pass rate relative to the baseline at an equivalent budget. Cells for which the number of samples exceeds 50 marked O.O.B. (out of bounds).
|
2306.09896#25
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 26 |
Given that the overall efficacy of self-repair is unclear, one might wonder if there are certain conditions in which self-repair is most effective. When we break the APPS problems down by their difficulty level (Appendix B), we find much larger gains on harder problems than on easy problems: GPT-3.5 sees up to a 34% performance gain relative to the baseline on competition-level problems, for example, but no performance gain on introductory-level problems (Figure 12, Appendix B). This would suggest that self-repair is more effective when the model's baseline performance is low, and yet we just saw that stronger models (on average) benefit more from self-repair even though their base performance is higher. We conclude that the correlation between program generation and repair success rates makes it difficult to establish a priori which one will overpower the other in a given domain; see Appendix B for a more thorough discussion of this phenomenon.
|
2306.09896#26
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 27 |
While some of the mechanisms behind effective self-repair remain elusive, we do observe a clear trend with respect to the relationship between the hyper-parameters. Given a fixed number of feedback-repairs (nfr), increasing the number of initial programs (np) (i.e., moving right along the x-axis on the heat maps) consistently leads to relative performance gains for all models. On the other hand, fixing np and increasing nfr (i.e., moving up along the y-axis on the heat maps) does not appear to be worth the additional cost incurred, giving very marginal gains at higher budgets and oftentimes even decreasing performance at lower budgets. This suggests that, given a fixed budget, the most important factor determining whether self-repair will lead to a correct program or not is the diversity of the base samples that are generated up-front, rather than the diversity of the repairs sampled. Having more initial samples increases the likelihood of there being at least one program which is close to the ideal program and, hence, can be successfully repaired.
Since nfr = 1 appears to be the best overall choice for the hyper-parameter nfr, we next isolate the effect of the number of initial programs, np, by exploring a denser set of possible values:
|
2306.09896#27
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 28 |
[Figure 4, panels (a) CodeLlama-13b-instruct and (b) GPT-3.5: plots rendered as images in the original. Left subplot axes: number of programs sampled vs. mean pass rate; right subplot: heat map over initial programs (np) and feedback-repair samples (nfr), with cells beyond 50 samples marked O.O.B.]
|
2306.09896#28
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 29 |
(b) GPT-3.5.
Figure 4: CodeLlama-13b-instruct and GPT-3.5 self-repair results on HumanEval. Left: Mean pass rate vs. number of samples generated. Black line is i.i.d. sampling without repair from the same model. Note that the error bars are often smaller than the markers. Right: Normalized mean pass rate relative to the baseline at an equivalent budget. Cells for which the number of samples exceeds 50 marked O.O.B. (out of bounds).
(np, nfr) ∈ {1, 2, ..., 24, 25} × {1}. The plots are shown in Figure 5 for both MP = MF ∈ {CodeLlama, GPT-3.5, GPT-4} and the baseline, no-repair approaches.8 Again, we observe small performance gains only for the stronger models, growing to be somewhat larger at higher budgets but nonetheless remaining relatively modest.
4.2 BOOSTING THE FEEDBACK UNLOCKS PERFORMANCE GAINS FROM REPAIR
Next, we conduct an experiment to test the hypothesis that performance gained from self-repair is limited by the model's ability to introspect and debug its own code, since this is the key distinguishing component between code generation and self-repair.
|
2306.09896#29
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 30 |
For this experiment, we set MP to be the weaker model (Code Llama on HumanEval, GPT-3.5 on APPS) and MF to be the stronger model (GPT-3.5 on HumanEval, GPT-4 on APPS). We then vary the hyper-parameters as (np, nf, nr) ∈ {1, 2, ..., 24, 25} × {1} × {1}, similarly to the previous experiment.9 To keep the computational budget tractable, and since the variance was seen to be very low in the previous experiments, we use Nf = 10 instead of Nf = 25 for this experiment (see Section 3.2).
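The control flow below is a rough sketch of this decoupled setup, in which MP produces and repairs programs while the stronger MF supplies the feedback; generate, feedback, repair and run_tests are placeholder callables rather than a real API, and the early return is a simplification of the paper's sampling-based evaluation.

```python
def repair_with_boosted_feedback(spec, tests, M_P, M_F, run_tests,
                                 n_p=25, n_f=1, n_r=1):
    # Stage 1: n_p initial i.i.d. programs from the weaker model M_P.
    programs = [M_P.generate(spec) for _ in range(n_p)]
    for prog in programs:
        if run_tests(prog, tests):       # an initial sample already passes
            return prog

    # Stage 2: for each failing program, the stronger model M_F writes n_f
    # feedback strings, and M_P attempts n_r repairs per feedback string.
    for prog in programs:
        for _ in range(n_f):
            fb = M_F.feedback(spec, prog)
            for _ in range(n_r):
                candidate = M_P.repair(spec, prog, fb)
                if run_tests(candidate, tests):
                    return candidate
    return None                          # no sample within the budget passed
```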
The results for this experiment are shown in Figure 5a (yellow line) and Figure 5b (bright blue line) for HumanEval and APPS, respectively. Although the exact increases in pass rate differ, we observe consistent trends in both figures: leveraging the stronger model for feedback allows the weaker model
8 Note that since nfr is fixed, in these plots there is a direct correlation between np and k: k = np + np = 2np.
9 Note that since we are now operating in a setting in which the feedback and repair stages must be separated, we have three hyper-parameters (np, nf, nr) instead of two (np, nfr) (Section 3.1).
|
2306.09896#30
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 31 |
(a) CodeLlama-13b-instruct and GPT-3.5 on HumanEval.
(b) GPT-3.5 and GPT-4 on APPS.
Figure 5: Results when nfr (or nf and nr) = 1. Shaded region shows ±1 standard deviation.
Table 1: Success rate of repair with GPT-4's explanations vs. with those of our human participants.
Difficulty        Introductory   Interview   Competition   Overall
GPT-4 Feedback    42.64%         19.33%      3.67%         33.30%
Human Feedback    62.21%         45.67%      14.67%        52.60%
to break through the performance barrier and become more effective than i.i.d. sampling without repair. This suggests that the textual feedback stage itself is of crucial importance, and that improving it relieves the bottleneck in self-repair.
# 4.3 HUMAN FEEDBACK SIGNIFICANTLY IMPROVES THE SUCCESS RATE OF GPT-4 REPAIR
|
2306.09896#31
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 32 |
# 4.3 HUMAN FEEDBACK SIGNIFICANTLY IMPROVES THE SUCCESS RATE OF GPT-4 REPAIR
For our final experiment, we consider the effect of using an expert human programmer's feedback when performing repair with very strong models such as GPT-4. The goal of this study is not to do a direct comparison between a human-in-the-loop approach vs. self-repair, since a human-in-the-loop approach imposes more cognitive burden, which we do not study. Instead, our goal is to further investigate how and why feedback quality affects downstream performance in self-repair.
Data collection methodology. We recruit 16 participants and collect a total of 2 human-written pieces of feedback for each of 40 failing programs sampled from GPT-4. Each program is shown to two different participants, to reduce variance caused by participants' skill levels and writing style. Participants were asked to spend approximately one hour on the study overall, and were compensated with a $15 gift card. This study was approved by our Institutional Review Board (IRB) and carried out exclusively through an online survey. See Appendix C for more details on the data collection methodology, including a complete copy of the instructions which we provide to our participants.
|
2306.09896#32
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 33 |
Quantitative analysis. Having obtained two human-written pieces of feedback for each program, we sample 25 repair candidates for each (feedback, program)-pair from GPT-4. We condition on the specification, the initial program, and the feedback string; in addition to the feedback collected from our participants, we also try two of GPT-4's own feedback strings for each program. Finally, we execute all of these candidate repairs against the test bed, and take note of how often they pass.
The results are summarized in Table 1, with a complete task-by-task breakdown in Appendix D. We note that the overall success rate is increased by over 1.57× when we replace GPT-4's own feedback with that of our human participants. Perhaps unsurprisingly, the relative difference increases as the problems get harder, indicating that GPT-4's ability to produce accurate and useful feedback trails further behind our human participants' when the task (and code) becomes more complex.
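As a quick arithmetic check of the "over 1.57×" figure, using only the overall column of Table 1:

```python
# Overall repair success rates from Table 1.
gpt4_feedback = 0.3330    # repair with GPT-4's own feedback
human_feedback = 0.5260   # repair with human-written feedback
print(f"{human_feedback / gpt4_feedback:.2f}x")   # ~1.58x, i.e. over 1.57x
```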
Qualitative analysis. We manually go through all of GPT-4's and the participants' feedback and note down whether the feedback: (a) seems, at a cursory glance, to be correct, or if it is obviously
|
2306.09896#33
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 34 |
inaccurate; (b) explicitly suggests a small change to the code (e.g. "change the condition on line X"); (c) explicitly suggests a large change to the code (e.g. "frame the problem as min-cut instead of shortest-path"); (d) contains blocks of pseudocode or Python (which GPT-4's feedback never does, per our experiment design); or (e) expresses uncertainty (using phrases such as "unsure", "it appears", etc.).10 Examples of each category are shown in Appendix E. We find that
• Almost all human-contributed feedback interleaves natural language with occasional single-statement math/code expressions; only 2/80 responses include pseudocode or explicit Python.
• GPT-4's feedback is much more likely to be inaccurate (32/80 vs. 7/80 for the human feedback).
• GPT-4 is more likely to explicitly suggest small changes (54/80 vs. 42/80 for GPT-4 and the participants, respectively; 28/48 vs. 38/73 if we filter out suggestions which are obviously incorrect), while human participants show a slightly greater tendency to suggest high-level changes (23/80 vs. 18/80 for GPT-4; 21/73 vs. 13/48 when seemingly correct).
|
2306.09896#34
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 35 |
• Our human participants sometimes express uncertainty (7/80); GPT-4 never does (0/80).
This further analysis suggests that the results in Table 1 are not due to artefacts such as our participants providing explicit code blocks which the model simply copies. Instead, the difference in performance appears to be caused by a combination of more accurate feedback, a greater ability to suggest high-level, large-scale changes to the code when needed, and our participants' ability to express their uncertainty (instead of confidently giving potentially inaccurate feedback).
# 5 LIMITATIONS
Firstly, to reduce computational cost, we pre-populate and then sub-sample from a single large repair tree to bootstrap a large number of repair trees for each setting of the hyper-parameters (Section 3.2). This risks introducing statistical artefacts in our analysis. To minimize this risk, we bound np and nfr far below Np and Nfr, respectively, in our self-repair experiments. Furthermore, we note that the standard deviation is very small in our experiments for all values of np and nfr (see the scatter plots in Figures 3, 4), offering increased confidence in our results.
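The sketch below illustrates the sub-sampling idea under a hypothetical tree structure (objects with .roots, .children and .passes attributes); it is meant only to convey the bootstrap estimator, not to reproduce the paper's implementation.

```python
import random

def estimate_pass_rate(big_tree, n_p, n_fr, n_trials=1000, seed=0):
    """Estimate the pass rate for a given (n_p, n_fr) setting by repeatedly
    drawing small repair trees from one large pre-populated tree."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        roots = rng.sample(big_tree.roots, n_p)                   # sub-sample initial programs
        repairs = [child
                   for root in roots
                   for child in rng.sample(root.children, n_fr)]  # sub-sample repairs per program
        if any(r.passes for r in roots) or any(c.passes for c in repairs):
            hits += 1
    return hits / n_trials
```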
|
2306.09896#35
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 36 |
Secondly, we assume access to an executable suite of unit tests for each task. We do not, for example, require the model to extract tests from textual specifications. While this assumption may seem out of place in the era of chat-style assistants like ChatGPT (OpenAI, 2022), it does align well with established software engineering practices like Test-Driven Development (Astels, 2003). Furthermore, techniques which automatically synthesize test cases given a specification (Li et al., 2022; Chen et al., 2023a) may relieve some of the user burden.
Finally, our study on human data did not track how much time the participants took to debug the programs. As a result, we can only evaluate the quality of the feedback (and the impact this has on repair). Further research at the intersection of Human-Computer Interaction, AI, and program synthesis is needed to explore when and how human intervention should be leveraged, as well as how programming assistants should be designed to facilitate this style of interaction.
# 6 CONCLUSION
|
2306.09896#36
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 37 |
# 6 CONCLUSION
We investigated self-repair for code generation, looking specifically at CodeLlama-13b-instruct, GPT-3.5 and GPT-4 on problems taken from HumanEval and APPS. In a series of experiments, we observed that (1) when the cost of carrying out repair is taken into account, performance gains from self-repair are often modest, vary not only between but also within datasets, and rely on achieving sufficient diversity in the initial programs. Furthermore, by replacing the feedback stage we found that (2) substituting a weaker model's own feedback with that of a stronger model significantly improved performance. Finally, we carried out an experiment with human participants, in which we found that (3) replacing GPT-4's self-generated feedback with feedback provided by an experienced programmer increased the number of repaired programs which pass all unit tests by 57%. Our results suggest that self-repair is not a silver bullet for code generation, and that current models are held back by their inability to reliably produce accurate and useful feedback on why the code is wrong.
10 We do not count individual single-line statements/expressions such as "x = 5" as pseudocode or Python.
# ACKNOWLEDGMENTS
|
2306.09896#37
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 38 |
# ACKNOWLEDGMENTS
T.X. Olausson is supported by the Defense Advanced Research Projects Agency (DARPA) under the ASKEM program, award HR00112220042. T.X. Olausson was also supported through a position at Microsoft Research for part of the time period during which this work was carried out. A. Solar-Lezama is supported by the National Science Foundation (NSF) and Intel Corporation through NSF Grant CCF:2217064. This work benefited greatly from discussion with several colleagues at Microsoft Research. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, the Defense Advanced Research Projects Agency, Intel Corporation, or Microsoft Research.
# REFERENCES
Dave Astels. Test Driven Development: A Practical Guide. Prentice Hall Professional Technical Reference, 2003. ISBN 0131016490.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program Synthesis with Large Language Models, 2021. arXiv preprint arXiv:2108.07732. https://arxiv.org/abs/2108.07732.
|
2306.09896#38
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 39 |
Johannes Bader, Andrew Scott, Michael Pradel, and Satish Chandra. Getafix: Learning to fix bugs automatically. Proc. ACM Program. Lang., 3(OOPSLA), Oct 2019. doi: 10.1145/3360585.
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. CodeT: Code generation with generated tests. In International Conference on Learning Representations, 2023a.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating Large Language Models Trained on Code, 2021. arXiv preprint arXiv:2107.03374. https://arxiv.org/abs/2107.03374.
Xinyun Chen, Chang Liu, and Dawn Song. Execution-Guided Neural Program Synthesis. In International Conference on Learning Representations, 2019a.
|
2306.09896#39
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 40 |
Xinyun Chen, Chang Liu, and Dawn Song. Execution-Guided Neural Program Synthesis. In International Conference on Learning Representations, 2019a.
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching Large Language Models to Self-Debug, 2023b. arXiv preprint arXiv:2304.05128. https://arxiv.org/abs/2304.05128.
Zimin Chen, Steve Kommrusch, Michele Tufano, Louis-Noël Pouchet, Denys Poshyvanyk, and Martin Monperrus. SequenceR: Sequence-to-Sequence Learning for End-to-End Program Repair. IEEE Transaction on Software Engineering, 2019b.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling Language Modeling with Pathways, 2022. arXiv preprint arXiv:2204.02311. https://arxiv.org/abs/2204.02311.
|
2306.09896#40
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 41 |
Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sablé-Meyer, Lucas Morales, Luke Hewitt, Luc Cary, Armando Solar-Lezama, and Joshua B Tenenbaum. DreamCoder: Bootstrapping Inductive Program Synthesis with Wake-Sleep Library Learning. In The International Conference on Programming Language Design and Implementation, 2021.
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. InCoder: A generative model for code infilling and synthesis. In International Conference on Learning Representations, 2023.
Sumit Gulwani, Oleksandr Polozov, and Rishabh Singh. Program Synthesis. Foundations and Trends® in Programming Languages Series. Now Publishers, 2017. ISBN 9781680832921.
Kavi Gupta, Peter Ebert Christensen, Xinyun Chen, and Dawn Song. Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis. In Advances in Neural Information Processing Systems, 2020.
|
2306.09896#41
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 42 |
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. Measuring Coding Challenge Competence With APPS. In Advances in Neural Information Processing Systems, 2021.
Jeevana Priya Inala, Chenglong Wang, Mei Yang, Andres Codas, Mark Encarnación, Shuvendu Lahiri, Madanlal Musuvathi, and Jianfeng Gao. Fault-Aware Neural Code Rankers. In Advances in Neural Information Processing Systems, 2022.
Emanuel Kitzelmann. Inductive Programming: A Survey of Program Synthesis Techniques. In Approaches and Applications of Inductive Programming: Third International Workshop, 2010.
Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, and Percy S Liang. SPoC: Search-based Pseudocode to Code. In Advances in Neural Information Processing Systems, 2019.
Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi. CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning. In Advances in Neural Information Processing Systems, 2022.
|
2306.09896#42
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 43 |
Claire Le Goues, Michael Pradel, Abhik Roychoudhury, and Satish Chandra. Automatic Program Repair. IEEE Softw., 38(4):22–27, July 2021. ISSN 0740-7459. doi: 10.1109/MS.2021.3072577.
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. StarCoder: may the source be with you!, 2023. arXiv preprint arXiv:2305.06161. https://arxiv.org/abs/2305.06161.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with AlphaCode. Science, 378(6624):1092–1097, 2022. doi: 10.1126/science.abq1158.
Fan Long and Martin Rinard. Automatic Patch Generation by Learning Correct Code. In ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 2016.
|
2306.09896#43
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 44 |
Fan Long and Martin Rinard. Automatic Patch Generation by Learning Correct Code. In ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 2016.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-Refine: Iterative Refinement with Self-Feedback, 2023. arXiv preprint arXiv:2303.17651. https://arxiv.org/abs/2303.17651.
Ali Mesbah, Andrew Rice, Emily Johnston, Nick Glorioso, and Edward Aftandilian. DeepDelta: Learning to Repair Compilation Errors. In Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2019.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. In International Conference on Learning Representations, 2023.
|
2306.09896#44
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 45 |
OpenAI. Introducing ChatGPT, 2022. Blog post. https://openai.com/blog/chatgpt [Accessed 5/17/2023].
OpenAI. GPT-4 Technical Report, 2023. arXiv preprint arXiv:2303.08774. https://arxiv.org/abs/2303.08774.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, 2022.
Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William Yang Wang. Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies. arXiv preprint arXiv:2308.03188, 2023.
|
2306.09896#45
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 46 |
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023.
Oleksandr Polozov and Sumit Gulwani. FlashMeta: A Framework for Inductive Program Synthesis. In ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, 2015.
|
2306.09896#46
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 47 |
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. Code Llama: Open Foundation Models for Code. arXiv preprint arXiv:2308.12950, 2023.
Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I. Wang. Natural Language to Code Translation with Execution. In Empirical Methods in Natural Language Processing, 2022.
|
2306.09896#47
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 48 |
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and Efficient Foundation Language Models, 2023. arXiv preprint arXiv:2302.13971. https://arxiv.org/abs/2302.13971.
Ke Wang, Rishabh Singh, and Zhendong Su. Dynamic Neural Program Embedding for Program Repair. In International Conference on Learning Representations, 2018.
Michihiro Yasunaga and Percy Liang. Graph-based, Self-supervised Program Repair from Diagnostic Feedback. In International Conference on Machine Learning, 2020.
Michihiro Yasunaga and Percy Liang. Break-It-Fix-It: Unsupervised Learning for Program Repair. In International Conference on Machine Learning, 2021.
Kechi Zhang, Zhuo Li, Jia Li, Ge Li, and Zhi Jin. Self-Edit: Fault-Aware Code Editor for Code Generation, 2023. arXiv preprint arXiv:2305.04087. https://arxiv.org/abs/2305.04087.
|
2306.09896#48
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 50 |
Data: Task ψ; sample budgets np, nf, nr
Result: A tuple (success, token count)
P ← [MP(ψ) | i ← 0 to np];
t ← sum([num_tokens(p) | p ∈ P]);
if any([p ⊨ ψ | p ∈ P]) then return (True, t); end
R ← [];
for p ∈ P do
    e ← error_msg(p, ψ);
    F_p ← [MF(ψ; p; e) | i ← 0 to nf];
    t ← t + sum([num_tokens(f) | f ∈ F_p]);
    for f ∈ F_p do
        R_pf ← [MP(ψ; p; e; f) | i ← 0 to nr];
        t ← t + sum([num_tokens(r) | r ∈ R_pf]);
        R ← R + R_pf;
    end
end
if any([r ⊨ ψ | r ∈ R]) then return (True, t); end
return (False, t);
Algorithm 1: Generating a repair tree T, computing T ⊨ ψ and its token count with batched self-repair. All operations should be taken to run in parallel whenever possible.
|
2306.09896#50
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 51 |
Data: Task ψ; sample budgets np, nf, nr
Result: A tuple (success, token count)
t ← 0;
for i ← 1 to np do
    p_i ← MP(ψ);
    t ← t + num_tokens(p_i);
    if p_i ⊨ ψ then return (True, t); end
    e_i ← error_msg(p_i, ψ);
    for j ← 1 to nf do
        f_ij ← MF(ψ; p_i; e_i);
        t ← t + num_tokens(f_ij);
        for k ← 1 to nr do
            r_ijk ← MP(ψ; p_i; e_i; f_ij);
            t ← t + num_tokens(r_ijk);
            if r_ijk ⊨ ψ then return (True, t); end
        end
    end
end
return (False, t);
Algorithm 2: Generating a repair tree T, computing T ⊨ ψ and its token count with sequential self-repair. All operations executed serially.
# A ALTERNATIVE EVALUATION STRATEGIES FOR SELF-REPAIR
|
2306.09896#51
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 52 |
# A ALTERNATIVE EVALUATION STRATEGIES FOR SELF-REPAIR
In the main part of this paper, we chose to evaluate self-repair in terms of an adapted version of pass@k (Chen et al., 2021; Kulal et al., 2019), in which a single repair tree is considered equivalent to k = np + np · nfr samples from the baseline. This makes the results easy to digest for practitioners and scholars who are familiar with pass@k, and makes our evaluation strategy easy to relate to that of prior work. However, this evaluation strategy does not account for the feedback tokens produced by the same model, which also come at a cost.
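As an illustrative check of this budget-matching (the numbers below are chosen only for clarity and are not taken from the paper's experiments): with np = 10 initial programs and nfr = 1 joint feedback-repair sample per failing program, a single repair tree is compared against k = np + np · nfr = 10 + 10 · 1 = 20 i.i.d. samples from the no-repair baseline.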
In this appendix, we present results in terms of two alternative evaluation strategies which address the non-uniform costs of program and feedback samples by comparing two dependent variables, the pass rate and the number of tokens which had to be sampled from the model in order to achieve it, an approach which we dub pass@t. This way, we are able to compare not only how successful a particular configuration is but also how much "work" it requires from the model.
|
2306.09896#52
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 53 |
Formally, suppose that you are given a dataset D = {ψd}d and a chosen set of values for the hyper-parameters (MP, MF, np, nf, nr). Let T_d^i ∼ M(ψd) denote a repair tree that is sampled as described in Section 3.1 for the task ψd; let num_tokens(T_d^i) denote the total number of program and feedback tokens in the repair tree; and say that T_d^i ⊨ ψd if T_d^i has at least one leaf program that satisfies the unit tests in the specification ψd. Then the pass@t metric of this choice of hyper-parameters is defined as the expected pass rate at the number of tokens which you would expect to generate with this choice of hyper-parameters:
$$\text{pass@t} \;\triangleq\; \mathbb{E}_{\psi_d \sim D,\; T_d^i \sim M(\psi_d)}\left[\, T_d^i \models \psi_d \,\right] \quad \text{at} \quad t = \mathbb{E}_{\psi_d \sim D,\; T_d^i \sim M(\psi_d)}\left[\, \text{num\_tokens}(T_d^i) \,\right]$$
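A minimal sketch of how this pair of expectations could be estimated empirically, assuming each task contributes a list of (passed, token_count) outcomes for its sampled repair trees; the names below are illustrative and do not come from the paper's code:

from statistics import mean

def pass_at_t(per_task_trees):
    # per_task_trees: list over tasks; each entry is a list of
    # (passed, token_count) tuples, one per sampled repair tree.
    pass_rate = mean(mean(1.0 if ok else 0.0 for ok, _ in trees)
                     for trees in per_task_trees)
    tokens = mean(mean(tok for _, tok in trees) for trees in per_task_trees)
    return pass_rate, tokens  # (estimated pass rate, estimated t)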
A.1 BATCHED PASS@T
|
2306.09896#53
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 54 |
A.1 BATCHED PASS@T
The first variation we will consider is batched pass@t. In this strategy, repair trees are assumed to be generated as in Algorithm 1: all np initial programs are sampled in parallel, then checked for correctness; if none of them pass, then all np · nfr repairs of all initial programs are sampled in parallel, after which we check if any of the repairs pass. The total number of tokens sampled so far is recorded at every point, and returned alongside the value of T ⊨ ψ. Thus, the number of tokens which are sampled depends on both the success rate in the initial round of program generation as well as the relative verbosity of the feedback and programs. Averaging the results over all of the tasks, we get not only a mean pass rate but also a mean token count, which can be plotted together as points on a curve.
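For concreteness, here is a small Python sketch of this batched bookkeeping, mirroring Algorithm 1; gen_program, gen_feedback, gen_repair, error_msg, passes and num_tokens are placeholder callables standing in for the model calls and unit-test checks, not names from the paper's code:

def batched_self_repair(task, n_p, n_f, n_r,
                        gen_program, gen_feedback, gen_repair,
                        error_msg, passes, num_tokens):
    # Stage 1: sample all initial programs up front and count their tokens.
    programs = [gen_program(task) for _ in range(n_p)]
    tokens = sum(num_tokens(p) for p in programs)
    if any(passes(p, task) for p in programs):
        return True, tokens
    # Stage 2: for every initial program, sample feedback strings and then
    # candidate repairs conditioned on each feedback string.
    repairs = []
    for p in programs:
        e = error_msg(p, task)
        feedbacks = [gen_feedback(task, p, e) for _ in range(n_f)]
        tokens += sum(num_tokens(f) for f in feedbacks)
        for f in feedbacks:
            candidates = [gen_repair(task, p, e, f) for _ in range(n_r)]
            tokens += sum(num_tokens(r) for r in candidates)
            repairs.extend(candidates)
    return any(passes(r, task) for r in repairs), tokens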
|
2306.09896#54
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 55 |
Figures 6, 7 and 8 show the results of all experiments from the main paper, repeated with this evaluation strategy. Note that while these plots may at first look much like those of Section 4 they are subtly different in that both axes are now dependent variables (recall that in pass@k, k is an independent variable set ahead of time). The better a particular model is, the closer it would thus get to (1.0, 0.0), i.e. the top-left corner of the plot.
Broadly speaking, we observe the same trends as were noted in Section 4: marginal gains for the stronger models, little or no gains for the weaker models unless the feedback is provided by the stronger model; typically better performance when setting np > nfr, except for GPT-3.5 on HumanEval where performance is relatively stable across the board.
|
2306.09896#55
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 56 |
[Plots: pass rate vs. mean number of tokens generated, with heatmaps over initial programs (np) and feedback-repairs (nfr).]
(a) GPT-3.5.
(b) GPT-4.
Figure 6: GPT-3.5 and GPT-4 self-repair results on APPS, evaluated in terms of batched pass@t. C.f. Figure 3.
|
2306.09896#56
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 57 |
[Plots: pass rate vs. mean number of tokens generated, with heatmaps over initial programs (np) and feedback-repairs (nfr).]
(a) CodeLlama-13b-instruct.
(b) GPT-3.5.
|
2306.09896#57
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 58 |
(b) GPT-3.5.
Figure 7: CodeLlama-13b-instruct and GPT-3.5 self-repair results on HumanEval, evaluated in terms of batched pass@t. C.f. Figure 4.
(a) CodeLlama and GPT-3.5 on HumanEval. (b) GPT-3.5 and GPT-4 on APPS.
Figure 8: Batched pass@t curves for each model when nfr (or nf and nr) = 1. C.f. Figure 5.
A.2 SEQUENTIAL PASS@T
In this section, we model self-repair as a depth-first search for a passing program, where the parameters np, nf, nr are taken to be bounds on the widths of each level; this is shown in Algorithm 2. This is meant to model a familiar chat-style user experience, where the user is provided with a single response and then spends some time trying to get the model to fix it. Note that this even more tightly couples the observed pass rates and the number of tokens generated: if the pass rate is high, a passing program will quickly be found and the number of tokens generated will be low, and vice versa.
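A corresponding sketch of the sequential, depth-first variant, again using placeholder callables rather than the paper's actual code; note how generation stops as soon as a passing program is found, which is what couples the token count to task difficulty:

def sequential_self_repair(task, n_p, n_f, n_r,
                           gen_program, gen_feedback, gen_repair,
                           error_msg, passes, num_tokens):
    # Depth-first search over the repair tree with level widths n_p, n_f, n_r.
    tokens = 0
    for _ in range(n_p):
        p = gen_program(task)
        tokens += num_tokens(p)
        if passes(p, task):
            return True, tokens
        e = error_msg(p, task)
        for _ in range(n_f):
            f = gen_feedback(task, p, e)
            tokens += num_tokens(f)
            for _ in range(n_r):
                r = gen_repair(task, p, e, f)
                tokens += num_tokens(r)
                if passes(r, task):
                    return True, tokens
    return False, tokens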
|
2306.09896#58
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 59 |
We again repeat the experiments from the main paper: the results are shown in Figures 9, 10, 11. As before, the key trends are still discernible. However, in this setting, self-repair appears to perform significantly worse. This is particularly visible when comparing the heatmaps in Figures 9 and 10 to those from before (e.g., 6, 7), as well as Figure 11. Although the evaluation strategy used in the main paper appears to favor self-repair slightly less than that in Section A.1, these results paint an even less impressive picture of self-repair.
(a) GPT-3.5.
|
2306.09896#59
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 60 |
(a) GPT-3.5.
(b) GPT-4.
Figure 9: GPT-3.5 and GPT-4 self-repair results on APPS, evaluated in terms of sequential pass@t. C.f. Figure 3.
|
2306.09896#60
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 61 |
(a) CodeLlama-13b-instruct.
(b) GPT-3.5.
Figure 10: CodeLlama-13b-instruct and GPT-3.5 self-repair results on HumanEval, evaluated in terms of sequential pass@t. C.f. Figure 4.
(a) CodeLlama and GPT-3.5 on HumanEval. (b) GPT-3.5 and GPT-4 on APPS.
Figure 11: Sequential pass@t curves for each model when nfr (or nf and nr) = 1. C.f. Figure 5.
# B SELF-REPAIR VS. PROBLEM DIFFICULTY
|
2306.09896#61
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 62 |
# B SELF-REPAIR VS. PROBLEM DIFFICULTY
The observations we make in Section 4 invite a tempting hypothesis: effective self-repair simply requires sufficient baseline performance, which is why GPT-4 can do self-repair on APPS (but not GPT-3.5) and GPT-3.5 can do self-repair on HumanEval (but not CodeLlama-13b-instruct). However, as we will show in this appendix, things actually appear to be a bit more complicated than that.
|
2306.09896#62
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 63 |
APPS problems are divided into three categories: introductory, interview and competition. This makes it easy to repeat our APPS experiments on problems of a specific difficulty; the results are shown in Figures 12, 13 and 14. These results clearly contradict the previous supposition that successful self-repair is simply a function of strong baseline performance; both GPT-3.5 and GPT-4 in fact appear to benefit more from self-repair the harder the problem is. To investigate this further, we calculate the fraction of repairs generated which pass the tests; this evaluates repair without the confounding factor of how often the initial sample of programs passes the tests without having to go through repair. Table 2 shows the results. Although it is important not to place too much weight on the specific numbers, since, for example, a less performant model's initial programs might be more difficult to repair than those generated by a stronger model, these results do suggest that the success rate of repair in fact gets lower the harder the task is (as one would intuitively expect, but in seeming contradiction to self-repair being more beneficial on APPS-competition than on APPS-introductory).
|
2306.09896#63
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 64 |
How can this be? The key realization to arrive at is that there are two competing factors in self-repair: the model's ability to generate code (which benefits i.i.d. sampling without repair) and its ability to debug and repair it (which benefits self-repair). These trend together, and it is not obvious a priori which factor will outweigh the other for a given dataset. This is further echoed by noting, for example, that GPT-3.5's baseline performance on APPS-introductory problems (Figure 12, top) is very similar to that of GPT-3.5 on HumanEval (Figure 4b), yet self-repair only appears fruitful in the latter experiment.
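The trade-off between these two factors can be made concrete with a deliberately simplified toy model (this is not the paper's actual sampling tree or estimator, just an illustration): with a fixed budget of k samples, i.i.d. sampling wins when the per-sample generation success rate dominates, while splitting the budget between initial programs and a single repair attempt per failure wins when the repair success rate is high enough.

```python
def p_no_repair(p_gen: float, k: int) -> float:
    # P(at least one of k i.i.d. samples passes), assuming independence.
    return 1 - (1 - p_gen) ** k

def p_with_repair(p_gen: float, p_rep: float, k: int) -> float:
    # Toy budget split: k // 2 initial samples, each failing one gets a single
    # repair attempt, so roughly k programs are evaluated in total.
    n_init = k // 2
    branch_fails = (1 - p_gen) * (1 - p_rep)
    return 1 - branch_fails ** n_init

# Illustrative (made-up) success rates; see Table 2 for empirically measured ones.
for p_gen, p_rep in [(0.30, 0.10), (0.10, 0.30), (0.10, 0.10)]:
    a = p_no_repair(p_gen, k=10)
    b = p_with_repair(p_gen, p_rep, k=10)
    print(f"p_gen={p_gen:.2f}, p_rep={p_rep:.2f}: i.i.d.={a:.3f}, repair={b:.3f}")
```

In this toy model the repair strategy wins exactly when p_rep exceeds p_gen, which is one way to see why strong baseline code generation alone does not determine whether self-repair pays off.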
|
2306.09896#64
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 65 |
It thus seems that whether or not self-repair will benefit performance cannot be reduced to something as simple as the model's baseline performance on the task. We leave it to future work to investigate in detail why this is; we offer the conjecture that it is due to a combination of (a) the power struggle between feedback generation and repair success rate (which benefit self-repair) vs. program generation success rate (which benefits i.i.d. sampling without repair); (b) the prevalence of ambiguity in the natural language specification, which might affect self-repair's ability to correctly identify flaws in a failing program; and (c) the informativeness of the unit tests. In the meantime, as has been shown in this work, improving the model's ability to provide feedback on code (e.g. through finetuning on code explanation data) can boost performance gained through self-repair.
Table 2: Repair success rates in various settings. The repair success rate is computed as number_of_passing_repairs / total_number_of_repairs_sampled.
|
2306.09896#65
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 66 |
Table 2: Repair success rates in various settings. The repair success rate is computed as number_of_passing_repairs / total_number_of_repairs_sampled.
| Dataset | Difficulty | Model | Repair Success Rate |
|---|---|---|---|
| APPS | introductory | GPT-3.5 | 13.7% |
| APPS | introductory | GPT-3.5+GPT-4 | 25.8% |
| APPS | introductory | GPT-4 | 28.8% |
| APPS | interview | GPT-3.5 | 4.2% |
| APPS | interview | GPT-3.5+GPT-4 | 8.9% |
| APPS | interview | GPT-4 | 8.7% |
| APPS | competition | GPT-3.5 | 1.2% |
| APPS | competition | GPT-3.5+GPT-4 | 2.9% |
| APPS | competition | GPT-4 | 8.6% |
| APPS | overall | GPT-3.5 | 4.7% |
| APPS | overall | GPT-3.5+GPT-4 | 9.6% |
| APPS | overall | GPT-4 | 10.8% |
| HumanEval | - | CodeLlama | 0.4% |
| HumanEval | - | CodeLlama+GPT-3.5 | 17.6% |
| HumanEval | - | GPT-3.5 | 21.9% |
|
2306.09896#66
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 67 |
[Figure residue: plots of mean pass rate versus number of programs sampled (k = np + nr) and versus number of initial programs (np), for various settings of np and nr; see Figure 12.]
|
2306.09896#67
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 69 |
Figure 12: GPT-3.5 results from Figure 3 (Section 4.1) per APPS difficulty (row), from top to bottom: introductory, interview, and competition.
|
2306.09896#69
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 70 |
[Figure residue: plots of mean pass rate versus number of programs sampled (k = np + nr) and versus number of initial programs (np), per APPS difficulty.]
|
2306.09896#70
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 73 |
[Figure residue: plots of mean pass rate versus number of programs sampled (k = np + nr), comparing GPT-4 and GPT-3.5 without repair, each model repairing its own code, and GPT-3.5 code repaired with GPT-4 feedback.]
|
2306.09896#73
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 75 |
Participants. We recruit 16 participants, consisting of 15 graduate students and 1 professional machine learning engineer. Participants were told to spend approximately one hour on the study overall, and were compensated with a $15 gift card. Data collection. We first sample 20 tasks {ψ_i}_{i=1}^{20} from the APPS test set; to make the data collection process less time-consuming for the participants of the study, we skew the distribution towards easier tasks (14 introductory; 3 interview; 3 competition). For each task ψ_i, we then sample two failing GPT-4 completions p_{i,1}, p_{i,2}, making for a total of 20 · 2 = 40 programs to refine. Each participant is provided with five different base programs based on their level of experience with Python and competitive programming. Programs are taken from distinct tasks; participants are never shown two different programs belonging to the same task. Participants are then asked to explain, in their own words, what the program is doing wrong. To reduce the cognitive load for participants, each program p_{i,j} is accompanied by the error message e_{i,j} and two feedback strings f_{i,j,1}, f_{i,j,2} sampled from GPT-4. We obtain these feedback
|
2306.09896#75
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 76 |
is accompanied by the error message e_{i,j} and two feedback strings f_{i,j,1}, f_{i,j,2} sampled from GPT-4. We obtain these feedback strings by randomly sampling from the feedback-repair pairs used in the previous experiments and removing the code block. Note that each of the 40 programs will be shown to two different participants, to reduce variance caused by participants' skill levels and writing style. This human data collection was approved by our Institutional Review Board (IRB) and carried out exclusively through an online survey.
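The assignment arithmetic implied above (20 tasks · 2 programs, two participants per program, five programs per participant, hence 16 participants) can be checked with a small sketch; the round-robin scheme below is purely illustrative and is not the balancing procedure actually used in the study.

```python
import itertools

n_tasks, programs_per_task = 20, 2
views_per_program = 2          # each program is shown to two participants
programs_per_participant = 5

n_programs = n_tasks * programs_per_task              # 40
n_views = n_programs * views_per_program              # 80
n_participants = n_views // programs_per_participant  # 16

# Illustrative round-robin assignment (the real study additionally balanced by
# participants' experience with Python and competitive programming).
programs = [(task, label) for task in range(n_tasks) for label in "AB"]
assignment = {p: [] for p in range(n_participants)}
slots = itertools.cycle(range(n_participants))
for program in programs:
    for _ in range(views_per_program):
        assignment[next(slots)].append(program)

# Every participant gets exactly five programs, all from distinct tasks.
assert all(len(progs) == programs_per_participant for progs in assignment.values())
assert all(len({task for task, _ in progs}) == len(progs) for progs in assignment.values())
print(n_programs, n_views, n_participants)  # 40 80 16
```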
|
2306.09896#76
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 77 |
Instructions. Participants were given a slide deck with instructions. The following ten images show the instructions, which include an example of a task shown to a participant:
[Slide images, OCR-garbled. Recoverable slide titles: "Tasks + Setup", "Your Answer", "Example", "1. Problem Specification", and "2. Incorrect Program". The "Incorrect Program" slide additionally suggests running the program on your own machine if you are struggling with debugging, and notes how the programs handle inputs and outputs.]
|
2306.09896#77
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 78 |
[Slide image, OCR-garbled: "4. Model Explanations". Recoverable text: to help participants get started with the debugging, each page lists two example explanations; these explanations are generated by the model itself and might be completely wrong, so they should be treated like CoPilot suggestions.]
|
2306.09896#78
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 79 |
|
2306.09896#79
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 80 |
Study Tips. We are very grateful for your help!
- Make sure you understand the task first! The programs have subtle logic errors, not just simple compiler errors.
- Try to write clear and concise explanations, with proper grammar and punctuation.
- Feel free to use (or not use) the model explanations when writing your answers; but make sure your answer is self-contained!
- The tasks vary in difficulty. Feel free to allocate your time as you see fit; we are not measuring how quickly you complete the tasks or anything like that!
- Feel free to use external tools: use pen and paper or a whiteboard to help you reason about the task at hand; use a Python IDE to execute and debug the code; search online for help.
- Have a question? Ask before moving on with the study!
|
2306.09896#80
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 81 |
[Slide image, OCR-garbled: "3. Error Message". Recoverable text: the error message shows the test that the program failed on; it contains an example input, the program's incorrect output, and the expected output. Tip: try copy-pasting the input to a file and piping it to the program.]
|
2306.09896#81
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 82 |
5. Answer Form. Finally, each page contains an embedded Google Form; no login is required. Submit your explanation of what the program is doing wrong. Your answer must be self-contained: it should not be of the form "Just like the first model explanation describes, the issue with the code is that ...".
FAQ
- Are you collecting data as I visit the website? No - none at all. Only your final answers are recorded.
- What is the point of the study? To investigate how much better the models are at fixing code when given human feedback, instead of having to debug the code themselves.
- Are you evaluating how useful the model explanations were to me? No - they are just there to help you get started with the debugging. We only care about your final answer.
|
2306.09896#82
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 83 |
# D HUMAN EXPERIMENT (QUANTITATIVE ANALYSIS): RESULTS PER TASK
In the table below, we give a complete breakdown of the quantitative results presented in Section 4.3. Note that each program is associated with four different pieces of feedback: two sampled from GPT-4, and two given by our human participants. Each cell is the number of repair candidates (out of 25) that passed all the unit tests. See Section 4.3 for details, as well as Appendix C for the instructions given to participants.
|
2306.09896#83
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 84 |
| Task | Difficulty | Program | GPT-4 #1 | GPT-4 #2 | Human #1 | Human #2 |
|---|---|---|---|---|---|---|
| 2106 | interview | A | 7 | 10 | 10 | 0 |
| 2106 | interview | B | 0 | 2 | 20 | 16 |
| 2673 | interview | A | 4 | 7 | 17 | 24 |
| 2673 | interview | B | 3 | 25 | 25 | 25 |
| 2923 | interview | A | 0 | 0 | 0 | 0 |
| 2923 | interview | B | 0 | 0 | 0 | 0 |
| 3070 | competition | A | 0 | 0 | 0 | 0 |
| 3070 | competition | B | 3 | 0 | 5 | 0 |
| 3286 | competition | A | 2 | 6 | 10 | 25 |
| 3286 | competition | B | 0 | 0 | 0 | 4 |
| 3754 | competition | A | 0 | 0 | 0 | 0 |
| 3754 | competition | B | 0 | 0 | 0 | 0 |
| 4182 | introductory | A | 25 | 25 | 25 | 24 |
| 4182 | introductory | B | 25 | 0 | 25 | 25 |
| 4195 | introductory | A | 25 | 3 | 24 | 23 |
| 4195 | introductory | B | 23 | 25 | 25 | 25 |
| 4281 | introductory | A | 0 | 4 | 0 | 0 |
| 4281 | introductory | B | 0 | 0 | 0 | 0 |
| 4333 | introductory | A | 25 | 0 | 25 | 0 |
| 4333 | introductory | B | 23 | 24 | 24 | 25 |
| 4347 | introductory | A | 0 | 0 | 7 | 25 |
| 4347 | introductory | B | 0 | 0 | 25 | 25 |
| 4426 | introductory | A | 25 | 25 | 25 | 25 |
| 4426 | introductory | B | 25 | 25 | 25 | 25 |
| 4450 | introductory | A | 0 | 0 | 0 | 0 |
| 4450 | introductory | B | 24 | 0 | 22 | 24 |
| 4507 | introductory | A | 0 | 0 | 0 | 0 |
| 4507 | introductory | B | 0 | 0 | 1 | 0 |
| 4514 | introductory | A | 15 | 21 | 1 | 16 |
| 4514 | introductory | B | 0 | 0 | 25 | 0 |
| 4704 | introductory | A | 0 | 25 | 0 | 25 |
| 4704 | introductory | B | 25 | 25 | 24 | 23 |
| 4741 | introductory | A | 25 | 25 | 25 | 25 |
| 4741 | introductory | B | 25 | 25 | 25 | 25 |
| 4855 | introductory | A | 0 | 1 | 17 | 25 |
| 4855 | introductory | B | 0 | 2 | 3 | 23 |
| 4873 | introductory | A | 0 | 0 | 0 | 0 |
| 4873 | introductory | B | 0 | 0 | 0 | 18 |
| 4952 | introductory | A | 0 | 0 | 2 | 25 |
| 4952 | introductory | B | 24 | 8 | 24 | 21 |
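Aggregating a table like this into the per-feedback-source comparison discussed in Section 4.3 amounts to summing the relevant columns and dividing by the number of repair candidates drawn (25 per cell); a minimal sketch, using only the first task above as illustrative input:

```python
# Each row: (task, difficulty, program, gpt4_fb1, gpt4_fb2, human_fb1, human_fb2),
# where each count is the number of passing repair candidates out of 25 sampled.
rows = [
    ("2106", "interview", "A", 7, 10, 10, 0),
    ("2106", "interview", "B", 0, 2, 20, 16),
]

samples_per_cell = 25
feedback_strings_per_source = 2  # two GPT-4 and two human feedback strings per program

gpt4_passes = sum(r[3] + r[4] for r in rows)
human_passes = sum(r[5] + r[6] for r in rows)
total = len(rows) * feedback_strings_per_source * samples_per_cell

print(f"GPT-4 feedback: {gpt4_passes / total:.1%}")
print(f"Human feedback: {human_passes / total:.1%}")
```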
|
2306.09896#84
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 85 |
# E HUMAN EXPERIMENT (QUALITATIVE ANALYSIS): EXAMPLES
In this appendix, we give examples of each category used to classify the responses in the qualitative analysis of Section 4.3. For each category, we give one example from the human participants and one from GPT-4 (when possible). Each example contains: the textual specification of the task; the incorrect program; the error message (in the format it was shown to participants); the feedback in question; and a short comment motivating its classification.
# E.1 CATEGORY (A): OBVIOUSLY INCORRECT FEEDBACK
# E.1.1 GPT-4: EXAMPLE
### SPECIFICATION The recent schoolboard elections were hotly contested: a proposal to swap school start times for elementary
and high school students, a controversial new dress code proposal that bans athletic clothes in school, and a proposal to raise real-estate taxes to pay for a new football practice facility, and the list goes on and on. It is now hours after the polls have closed and a winner has yet to emerge!
|
2306.09896#85
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 86 |
-----Examples-----
Sample Input:
Penny Franklin
Marti Graham
Connie Froggatt
Joseph Ivers
Connie Froggatt
Penny Franklin
Connie Froggatt
Bruce Stanger
Connie Froggatt
Barbara Skinner
Barbara Skinner
***

Sample Output:
Connie Froggatt

### INCORRECT PROGRAM
from collections import defaultdict

votes = defaultdict(int)
candidate = input()
while candidate != "***":
    votes[candidate] += 1
    candidate = input()

total_votes = sum(votes.values())
max_votes = max(votes.values())
winner = [name for name, count in votes.items() if count == max_votes]

if len(winner) == 1 and max_votes > total_votes // 2:
    print(winner[0])
else:
    print("Runoff!")

### ERROR MESSAGE
===Input===
Penny Franklin
Marti Graham
Connie Froggatt
Joseph Ivers
Connie Froggatt
Penny Franklin
Connie Froggatt
Bruce Stanger
Connie Froggatt
Barbara Skinner
Barbara Skinner
***
===Program Output===
Runoff!
===Expected Output===
Connie Froggatt
In their desperation, the election officials turn to you and ask you to write a program to count the vote!
|
2306.09896#86
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 87 |
In their desperation, the election officials turn to you and ask you to write a program to count the vote!
-----Input----- The input consists of a single test case, which is a list of votes cast. Each line in the input contains the name of a candidate for whom a vote was cast. A name may consist of multiple words, separated by spaces. Words contain letters or hyphens, but no other punctuation characters. There will be at least votes on the list. The list of votes ends with a single line containing the characters ***. This line should not be counted. There can be up to valid votes.
-----Output----- If a candidate obtained a simple or absolute majority of all votes cast (that is, more than any other
candidate), output the name of this candidate! If no candidate obtained a simple majority, output: "Runoff!" (don't forget to include the exclamation mark!)
### FEEDBACK The concise explanation of the issue is: The code checks if the maximum votes are strictly greater than half
of the total votes to determine if there's a majority. However, it should check if the maximum votes are greater than or equal to half of the total votes.
### COMMENT The issue with the code is not that the ">" should have been a ">=", but rather that it is not allowing for a
simple majority.
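In line with the comment, a fix consistent with the specification is to drop the comparison against half of the total votes entirely and output the winner whenever there is a unique candidate with the most votes. A minimal corrected version of the program (our sketch, not a repair produced by the model):

```python
from collections import defaultdict

votes = defaultdict(int)
candidate = input()
while candidate != "***":
    votes[candidate] += 1
    candidate = input()

max_votes = max(votes.values())
winner = [name for name, count in votes.items() if count == max_votes]

# A "simple majority" here just means strictly more votes than any other
# candidate, i.e. a unique maximum; no comparison against half of the total
# votes is needed.
if len(winner) == 1:
    print(winner[0])
else:
    print("Runoff!")
```

On the sample input above this prints "Connie Froggatt", as expected.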
# E.1.2 HUMAN: EXAMPLE
|
2306.09896#87
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |
2306.09896
| 88 |
### SPECIFICATION Snuke and Raccoon have a heap of N cards. The i-th card from the top has the integer a_i written on it. They will share these cards. First, Snuke will take some number of cards from the top of the heap, then Raccoon will take all the remaining cards. Here, both Snuke and Raccoon have to take at least one card. Let the sum of the integers on Snuke's cards and Raccoon's cards be x and y, respectively. They would like to minimize |x-y|. Find the minimum possible value of |x-y|. -----Constraints----- - 2 \leq N \leq 2 \times 10^5 - -10^{9} \leq a_i \leq 10^{9} - a_i is an integer. -----Input----- Input is given from Standard Input in the following format: N a_1 a_2 ... a_{N} -----Output----- Print the answer. -----Sample Input----- 6 1 2 3 4 5 6 -----Sample Output----- 1 If Snuke takes four cards from the top, and Raccoon takes the remaining two cards, x=10, y=11, and thus |x-y|=1. This is the minimum possible value. ### INCORRECT PROGRAM def main(): n =
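For reference, a minimal prefix-sum solution to the specification above might look as follows; this is an illustrative sketch, not the incorrect program whose truncated beginning closes the chunk.

# Illustrative solution for the card-splitting specification above (not the
# program under repair): try every split point, keep a running prefix sum,
# and minimize |x - y| = |2 * prefix - total|.
import sys

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    a = list(map(int, data[1:1 + n]))
    total = sum(a)
    prefix = 0
    best = float("inf")
    # Snuke takes at least one card and must leave at least one for Raccoon,
    # so valid split points lie after cards 1 .. n-1.
    for i in range(n - 1):
        prefix += a[i]
        best = min(best, abs(total - 2 * prefix))
    print(best)

if __name__ == "__main__":
    main()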
|
2306.09896#88
|
Is Self-Repair a Silver Bullet for Code Generation?
|
Large language models have shown remarkable aptitude in code generation, but
still struggle on challenging tasks. Self-repair -- in which the model debugs
and fixes mistakes in its own code -- has recently become a popular way to
boost performance in these settings. However, only very limited studies on how
and when self-repair works effectively exist in the literature, and one might
wonder to what extent a model is really capable of repairing mistakes in code
which was originally generated by that very same model. In this paper, we
analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on
problems taken from HumanEval or APPS, finding that when the cost of carrying
out repair is taken into account, gains are often modest, vary significantly
between subsets of the data, and are sometimes not present at all. We
hypothesize that this is because self-repair is bottlenecked by the model's
ability to provide feedback on its own code; boosting the feedback with
stronger models, we observe performance gains even in settings where the model
does not benefit from self-repair. Finally, we find that providing the model
with feedback from human participants greatly benefits repair even for GPT-4,
and carry out a brief qualitative analysis of the differences observed.
|
http://arxiv.org/pdf/2306.09896
|
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
|
cs.CL, cs.AI, cs.PL, cs.SE
|
Added experiments for HumanEval (dataset) and Code Llama (model)
| null |
cs.CL
|
20230616
|
20231017
|
[
{
"id": "2211.16490"
},
{
"id": "2302.13971"
},
{
"id": "2308.12950"
},
{
"id": "2305.04087"
},
{
"id": "2204.02311"
},
{
"id": "2107.03374"
},
{
"id": "2305.06161"
},
{
"id": "2308.03188"
},
{
"id": "2108.07732"
},
{
"id": "2304.05128"
},
{
"id": "2303.17651"
},
{
"id": "2302.12813"
},
{
"id": "2303.08774"
}
] |