Dataset schema (one record per row; observed min/max field sizes from the dataset viewer):
- doi: string (10 characters)
- chunk-id: int64 (0 to 936)
- chunk: string (401 to 2.02k characters)
- id: string (12 to 14 characters)
- title: string (8 to 162 characters)
- summary: string (228 to 1.92k characters)
- source: string (31 characters)
- authors: string (7 to 6.97k characters)
- categories: string (5 to 107 characters)
- comment: string (4 to 398 characters)
- journal_ref: string (8 to 194 characters)
- primary_category: string (5 to 17 characters)
- published: string (8 characters)
- updated: string (8 characters)
- references: list
2306.09896
89
cards, x=10, y=11, and thus |x-y|=1. This is the minimum possible value. ### INCORRECT PROGRAM def main(): n = int(input()) a = list(map(int, input().split())) Snuke_sum = 0 Raccoon_sum = sum(a) min_diff = abs(Raccoon_sum) for i in range(n-1): Snuke_sum += a[i] Raccoon_sum -= a[i] diff = abs(Snuke_sum - Raccoon_sum) min_diff = min(min_diff, diff) print(min_diff) main() ### ERROR MESSAGE ===Input=== 2 10 -10 ===Program Output=== 0 ===Expected Output=== 20 ### FEEDBACK The code initializes the result `min_diff` to `abs(Raccoon_sum)` corresponding to the case where Raccoon takes all the cards, however this is incorrect because of the constraint "Snuke and Raccoon have to take at least one card". Instead just setting `min_diff` to negative infinity initially would fix it. ### COMMENT This human participant was on the right track, but note that setting `min_diff` to *negative* infinity would cause it to never be updated inside of the
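For reference, a corrected sketch of the program in the chunk above (an illustration added here, not text from the paper): initializing `min_diff` to positive infinity, rather than to `abs(Raccoon_sum)` or to negative infinity, means the first valid split, in which both players take at least one card, always overwrites it.

```python
def main():
    n = int(input())
    a = list(map(int, input().split()))
    snuke_sum = 0
    raccoon_sum = sum(a)
    min_diff = float("inf")  # positive infinity: the first valid split always updates it
    for i in range(n - 1):   # Snuke takes a[0..i], so each player gets at least one card
        snuke_sum += a[i]
        raccoon_sum -= a[i]
        min_diff = min(min_diff, abs(snuke_sum - raccoon_sum))
    print(min_diff)

main()
```

On the failing test shown above (`2` / `10 -10`) this prints `20`, as expected.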
2306.09896#89
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
92
### SPECIFICATION There is a square in the xy-plane. The coordinates of its four vertices are (x_1,y_1),(x_2,y_2),(x_3,y_3) and (x_4,y_4) in counter-clockwise order. (Assume that the positive x-axis points right, and the positive y-axis points up.) Takahashi remembers (x_1,y_1) and (x_2,y_2), but he has forgot (x_3,y_3) and (x_4,y_4). Given x_1,x_2,y_1,y_2, restore x_3,y_3,x_4,y_4. It can be shown that x_3,y_3,x_4 and y_4 uniquely exist and have integer values. -----Constraints----- - |x_1|,|y_1|,|x_2|,|y_2| \leq 100 - (x_1,y_1) != (x_2,y_2) - All values in input are integers. -----Input----- Input is given from Standard Input in the following format: x_1 y_1 x_2 y_2 -----Output----- Print
2306.09896#92
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
93
- All values in input are integers. -----Input----- Input is given from Standard Input in the following format: x_1 y_1 x_2 y_2 -----Output----- Print x_3,y_3,x_4 and y_4 as integers, in this order. -----Sample Input----- 0 0 0 1 -----Sample Output----- -1 1 -1 0 (0,0),(0,1),(-1,1),(-1,0) is the four vertices of a square in counter-clockwise order. Note that (x_3,y_3)=(1,1),(x_4,y_4)=(1,0) is not accepted, as the vertices are in clockwise order. ### INCORRECT PROGRAM x1, y1, x2, y2 = map(int, input().split()) x3 = x2 + (y2 - y1) y3 = y2 - (x2 - x1) x4 = x1 + (y2 - y1) y4 = y1 - (x2 - x1) print(x3, y3, x4, y4) ### ERROR MESSAGE ===Input=== 0 0 0 1 ===Program Output=== 1 1 1 0 ===Expected Output=== -1 1 -1 0 ### FEEDBACK The issue with the code is that it doesn't consider
2306.09896#93
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
94
0 0 0 1 ===Program Output=== 1 1 1 0 ===Expected Output=== -1 1 -1 0 ### FEEDBACK The issue with the code is that it doesn't consider the counter-clockwise order of the vertices. To fix this, we need to swap the signs of (y2 - y1) and (x2 - x1) in the calculations. ### COMMENT Swapping the listed signs is a concrete suggestion of a way to fix the code. Since the overall structure of the code would remain the same, we count this as a minor change.
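To make the suggested sign swap concrete, here is a corrected sketch (added for illustration, not taken from the paper): rotating the edge vector (x_2 - x_1, y_2 - y_1) by 90 degrees counter-clockwise gives (-(y_2 - y_1), x_2 - x_1), which yields the remaining two vertices.

```python
x1, y1, x2, y2 = map(int, input().split())
dx, dy = x2 - x1, y2 - y1
# Rotating (dx, dy) by 90 degrees counter-clockwise gives (-dy, dx)
x3, y3 = x2 - dy, y2 + dx
x4, y4 = x1 - dy, y1 + dx
print(x3, y3, x4, y4)
```

On the sample input `0 0 0 1` this prints `-1 1 -1 0`.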
2306.09896#94
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
95
# E.2.2 HUMAN: EXAMPLE ### SPECIFICATION Given is a string S representing the day of the week today. S is SUN, MON, TUE, WED, THU, FRI, or SAT, for Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, and Saturday, respectively. After how many days is the next Sunday (tomorrow or later)? -----Constraints----- - S is SUN, MON, TUE, WED, THU, FRI, or SAT. -----Input----- Input is given from Standard Input in the following format: S -----Output----- Print the number of days before the next Sunday. -----Sample Input----- SAT -----Sample Output----- 1 It is Saturday today, and tomorrow will be Sunday. ### INCORRECT PROGRAM S = input() days = ['SUN', 'MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT'] print((7 - days.index(S)) % 7) ### ERROR MESSAGE ===Input=== SUN ===Program Output=== 0 ===Expected Output=== 7 ### FEEDBACK The logic is incorrect. There should be no modulo operator. ### COMMENT Suggesting that the modulo operator should be removed is a relatively small, but explicit, change.
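A corrected sketch reflecting the feedback (added for illustration, not from the paper): dropping the modulo makes the SUN case return 7 instead of 0.

```python
S = input()
days = ['SUN', 'MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT']
# Without the `% 7`, SUN (index 0) yields 7 and SAT (index 6) yields 1, as required
print(7 - days.index(S))
```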
2306.09896#95
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
97
and N-1 connecting components. Parts are distinguishable, but connecting components are not. Part i has d_i holes, called Hole 1, Hole 2, ..., Hole d_i, into which a connecting component can be inserted. These holes in the parts are distinguishable. Each connecting component will be inserted into two holes in different parts, connecting these two parts. It is impossible to insert multiple connecting components into a hole. The character figure is said to be complete when it has the following properties: - All of the N-1 components are used to connect parts. - Consider a graph with N vertices corresponding to the parts and N-1 undirected edges corresponding to the pairs of vertices connected by a connecting component. Then, this graph is connected. Two ways A and B to make the figure complete are considered the same when the following is satisfied: for every pair of holes, A uses a connecting component to connect these holes if and only if B uses one to connect them. Find the number of ways to make the figure complete. Since the answer can be enormous, find the count modulo 998244353. -----Constraints----- - All values in input are integers. - 2 \leq N \leq 2 \times 10^5 - 1 \leq d_i <
2306.09896#97
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
98
-----Constraints----- - All values in input are integers. - 2 \leq N \leq 2 \times 10^5 - 1 \leq d_i < 998244353 -----Input----- Input is given from Standard Input in the following format: N d_1 d_2 \cdots d_N -----Output----- Print the answer. -----Sample Input----- 3 1 1 3 -----Sample Output----- 6 One way to make the figure complete is to connect Hole 1 in Part 1 and Hole 3 in Part 3 and then connect Hole 1 in Part 2 and Hole 1 in Part 3. ### INCORRECT PROGRAM def main(): N = int(input()) d = list(map(int, input().split())) MOD = 998244353 fact = [1] * (N+1) for i in range(1, N+1): fact[i] = fact[i-1] * i % MOD ans = 0 for ki in range(N): ans += d[ki] * fact[ki] % MOD * fact[N-ki-1] % MOD ans %= MOD ans = ans * fact[N-1] % MOD print(ans) main() ### ERROR MESSAGE ===Input=== 3 1 1 3 ===Program Output=== 18 ===Expected Output=== 6 ### FEEDBACK
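The chunk above is cut off before the feedback; as the next record explains, the program above overcounts. For reference, one possible correct approach (an added sketch, not the paper's; derived from a Prüfer-sequence argument and checked only against the sample) uses the closed form (N-2)! * prod(d_i) * C(sum(d_i) - N, N-2) mod 998244353; since N-2 is smaller than the prime, Lucas' theorem lets the binomial be evaluated with its top argument reduced modulo the prime.

```python
import sys

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    d = list(map(int, data[1:1 + n]))
    MOD = 998244353

    ans = 1
    for i in range(2, n - 1):      # (N-2)! mod p
        ans = ans * i % MOD
    for di in d:                   # product of d_i mod p
        ans = ans * di % MOD

    # C(sum(d) - N, N-2) mod p via a falling factorial; because N-2 < p,
    # Lucas' theorem reduces the top argument modulo p first.
    k = n - 2
    m = (sum(d) - n) % MOD
    num = den = 1
    for j in range(k):
        num = num * ((m - j) % MOD) % MOD
        den = den * (j + 1) % MOD
    ans = ans * num % MOD * pow(den, MOD - 2, MOD) % MOD
    print(ans)

main()
```

On the sample input (`3` / `1 1 3`) this prints `6`.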
2306.09896#98
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
99
The following is a concise explanation of the issue: The code above calculates the total number of ways to connect the parts, but it does not ensure that each connecting component is used only once. This results in overcounting the number of ways to make the figure complete. To fix the issue, we need to ensure that each connecting component is used exactly once. ### COMMENT The feedback suggests changing the code so that each connecting component is used exactly once, which does not seem to be taken into account in the current approach. Changing the code to satisfy this part of the specification seems to require a major rewrite. # E.3.2 HUMAN: EXAMPLE ### SPECIFICATION Oh, New Year. The time to gather all your friends and reflect on the heartwarming events of the past year...
2306.09896#99
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
100
$n$ friends live in a city which can be represented as a number line. The $i$-th friend lives in a house with an integer coordinate $x_i$. The $i$-th friend can come celebrate the New Year to the house with coordinate $x_i-1$, $x_i+1$ or stay at $x_i$. Each friend is allowed to move no more than once. For all friends $1 \le x_i \le n$ holds, however, they can come to houses with coordinates $0$ and $n+1$ (if their houses are at $1$ or $n$, respectively). For example, let the initial positions be $x = [1, 2, 4, 4]$. The final ones then can be $[1, 3, 3, 4]$, $[0, 2, 3, 3]$, $[2, 2, 5, 5]$, $[2, 1, 3, 5]$ and so on. The number of occupied houses is the number of distinct positions among the final ones. So all friends choose the moves they want to perform. After that the number of occupied houses is calculated. What is the minimum and the maximum number of occupied houses can there be? -----Input----- The first line contains a single integer $n$ ($1 \le n
2306.09896#100
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
101
occupied houses is calculated. What is the minimum and the maximum number of occupied houses can there be? -----Input----- The first line contains a single integer $n$ ($1 \le n \le 2 \cdot 10^5$) - the number of friends. The second line contains $n$ integers $x_1, x_2, \dots, x_n$ ($1 \le x_i \le n$) - the coordinates of the houses of the friends. -----Output----- Print two integers - the minimum and the maximum possible number of occupied houses after all moves are performed. -----Examples----- Input 4 1 2 4 4 Output 2 4 Input 9 1 1 8 8 8 4 4 4 4 Output 3 8 Input 7 4 3 7 1 4 3 3 Output 3 6 -----Note----- In the first example friends can go to $[2, 2, 3, 3]$. So friend $1$ goes to $x_1+1$, friend $2$ stays at his house $x_2$, friend $3$ goes to $x_3-1$ and friend $4$ goes to $x_4-1$. $[1, 1, 3, 3]$, $[2, 2, 3, 3]$ or $[2, 2, 4, 4]$ are also all valid options to obtain $2$ occupied
2306.09896#101
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
102
3, 3]$, $[2, 2, 3, 3]$ or $[2, 2, 4, 4]$ are also all valid options to obtain $2$ occupied houses. For the maximum number of occupied houses friends can go to $[1, 2, 3, 4]$ or to $[0, 2, 4, 5]$, for example. ### INCORRECT PROGRAM from collections import Counter def main(): n = int(input()) x = list(map(int, input().split())) cnt = Counter(x) max_houses = 0 for pos in range(1, n + 1): if cnt[pos]: max_houses += 1 occupied = [0] * (n + 2) for pos in x: if not occupied[pos - 1]:
2306.09896#102
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
103
occupied[pos - 1] = 1 elif not occupied[pos]: occupied[pos] = 1 elif not occupied[pos + 1]: occupied[pos + 1] = 1 min_houses = sum(occupied) print(min_houses, max_houses) main() ### ERROR MESSAGE ===Input=== 4 1 2 4 4 ===Program Output=== 4 3 ===Expected Output=== 2 4 ### FEEDBACK The code calculates both the minimum and maximum count of houses incorrectly. The correct solution requires conducting a tree search to determine the true maximum and minimum. This can be further optimized by first sorting the list of occupied houses and using a sliding window to process independent segments of the number line. ### COMMENT The feedback suggests conducting a tree search to determine the max and min, as well as a way to optimize this search. These constitute major changes to the program as it is currently written. E.4 CATEGORY (D): FEEDBACK CONTAINS BLOCKS OF PSEUDOCODE OR PYTHON CODE E.4.1 GPT-4: EXAMPLE GPT-4’s feedback never contains blocks of Python per our experiment design, in which the code block is stripped (Section 4.3). We find that GPT-4 never utilizes blocks of pseudocode, either.
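For the occupied-houses example above, a standard greedy over the sorted coordinates (an added sketch, not the participant's tree-search suggestion; it agrees with all three sample cases in the specification) computes both answers in O(n log n):

```python
def main():
    n = int(input())
    x = sorted(map(int, input().split()))

    # Minimum: scan sorted positions; one house at pos+1 covers every friend in [pos, pos+2]
    min_houses, covered_until = 0, -10**9
    for pos in x:
        if pos > covered_until:
            min_houses += 1
            covered_until = pos + 2

    # Maximum: send each friend (in sorted order) to the leftmost free spot among pos-1, pos, pos+1
    max_houses, last_taken = 0, -10**9
    for pos in x:
        for target in (pos - 1, pos, pos + 1):
            if target > last_taken:
                max_houses += 1
                last_taken = target
                break

    print(min_houses, max_houses)

main()
```

On the failing test above (`4` / `1 2 4 4`) this prints `2 4`.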
2306.09896#103
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
104
E.4.2 HUMAN: EXAMPLE ### SPECIFICATION Given is a string S representing the day of the week today. S is SUN, MON, TUE, WED, THU, FRI, or SAT, for Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, and Saturday, respectively. After how many days is the next Sunday (tomorrow or later)? -----Constraints----- - S is SUN, MON, TUE, WED, THU, FRI, or SAT. -----Input----- Input is given from Standard Input in the following format: S -----Output----- Print the number of days before the next Sunday. -----Sample Input----- SAT -----Sample Output----- 1 It is Saturday today, and tomorrow will be Sunday. ### INCORRECT PROGRAM day = input() days = ['SUN', 'MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT'] index = days.index(day) result = (7 - index) % 7 print(result) ### ERROR MESSAGE ===Input=== SUN ===Program Output=== 0 ===Expected Output=== 7 ### FEEDBACK before returning the result, we should just check to see if result ==0. so we just need to add: if result == 0: result = 7
2306.09896#104
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
105
### COMMENT The feedback contains a block of Python code that the model can simply copy in order to fix the program. # E.5 CATEGORY (E): FEEDBACK WHICH EXPRESSES UNCERTAINTY E.5.1 GPT-4: EXAMPLE In the 20 · 2 · 2 = 80 examples we study, we do not find that GPT-4 ever expresses uncertainty. E.5.2 HUMAN: EXAMPLE ### SPECIFICATION Two kingdoms had been at war for a long time, until the emperor intervened to bring an end to the conflict. The territory in question comprises an $M$ by $N$ rectangular grid. At the emperor's insistence, the two kings have withdrawn their troops until no two opposing troops are in adjacent squares of the map (adjacent being horizontal or vertical - diagonal is not considered).
2306.09896#105
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
106
The territory in question comprises an $M$ by $N$ rectangular grid. At the emperor's insistence, the two kings have withdrawn their troops until no two opposing troops are in adjacent squares of the map (adjacent being horizontal or vertical - diagonal is not considered). The emperor proposes to designate certain squares of the map as neutral territory. Neither king will be allowed to move troops into those squares, and the emperor's own forces will patrol them to be sure that both kings observe these rules. The emperor is frugal and does not want to commit more soldiers to this effort than absolutely necessary. His generals have marked each square of the map with the number of soldiers required to secure that square. What remains is to choose which of those squares should be patrolled. Write a program to determine the minimum number of soldiers that the emperor will need to be deploy to guarantee that the troops of one kingdom cannot move, in one or more steps, into squares occupied by the troops of the second kingdom (moving horizontally or vertically) without encountering the emperor's own soldiers. -----Input----- Input begins with a line containing $2$ integers, $w$ and $h$, denoting the width and
2306.09896#106
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
107
without encountering the emperor's own soldiers. -----Input----- Input begins with a line containing $2$ integers, $w$ and $h$, denoting the width and height of the map, where $1 \leq w, h \leq 40$. This is followed by $h$ lines. Each line contains $w$ characters, left justified. These characters will be 'A' or 'B', designating a position held by king A or king B, or a single numeric digit, designating a currently unoccupied position that can be secured by the use of that number of soldiers. For example, a '2' would indicate that two soldiers must be deployed to that square to secure it against passage of other troops. A '0' indicates terrain that is impassible - the emperor need not commit soldiers there because the kingdom troops cannot pass through that square. No 'A' will be adjacent, horizontally or vertically, to any 'B'. There will be at least one 'A' and one 'B' in the input. -----Output----- Print a single line containing an integer denoting the minimum number of soldiers that the emperor must deploy to guarantee that there is no open path between any 'A' position and any 'B' position, using any combination of
2306.09896#107
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
108
minimum number of soldiers that the emperor must deploy to guarantee that there is no open path between any 'A' position and any 'B' position, using any combination of horizontal or vertical moves. -----Examples----- Sample Input: 8 5 A11111AA AA7B111A 111BB111 11BBB111 11BBB11B Sample Output: 13 ### INCORRECT PROGRAM def main(): w, h = map(int, input().split()) grid = [input() for _ in range(h)] def bfs(x, y, z): dist = [[float("inf")] * w for _ in range(h)] dist[x][y] = 0 q = [(x, y)] for x, y in q: for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]: nx, ny = x + dx, y + dy if 0 <= nx < h and 0 <= ny < w and dist[nx][ny] == float("inf"): if grid[nx][ny] == z: dist[nx][ny] = dist[x][y] q.append((nx, ny)) if "0" <= grid[nx][ny] <= "9": dist[nx][ny] =
2306.09896#108
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
110
if grid[i][j] == "A": ans = min(ans, min(x for row in bfs(i, j, "B") for x in row)) # print(ans) # main() ### ERROR MESSAGE ===Input=== 8 5 A11111AA AA7B111A 111BB111 11BBB111 11BBB11B ===Program Output=== 0 ===Expected Output=== 13 ### FEEDBACK This code appears to be finding the minimum "distance" between an A position and a B position, where distance is measured in the number of soldiers required to secure the intermediate positions. This isn't really addressing the question at all, because even occupying the positions along some path from a particular A -> B won't necessarily block the kings from reaching each other from those positions through a different, longer path. You probably need some sort of min-cut algorithm to divide the graph into two connected components, one each for A and B nodes. ### COMMENT The feedback expresses mild uncertainty in two aspects. First, the user indicates that they are not completely certain in their understanding of the code's current behaviour by using the phrase "appears to be". Then, they express uncertainty in their suggestion for what to do instead, saying that one *probably* needs some sort of min-cut algorithm. # F PROMPTS
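To make the participant's min-cut suggestion in the record above concrete, here is an added sketch (not from the paper) of the standard reduction: split each square into an in-node and an out-node joined by an edge whose capacity is that square's soldier cost (infinite for 'A' and 'B', zero for impassable '0'), connect adjacent squares with infinite-capacity edges in both directions, attach every 'A' square to a source and every 'B' square to a sink, and take the max-flow (equal to the minimum vertex cut) as the answer.

```python
from collections import deque
import sys

def main():
    data = sys.stdin.read().split()
    w, h = int(data[0]), int(data[1])
    grid = data[2:2 + h]

    INF = float("inf")
    # Each cell (i, j) becomes two nodes: in = 2*(i*w+j), out = in + 1; plus source S and sink T.
    num_nodes = 2 * w * h + 2
    S, T = num_nodes - 2, num_nodes - 1
    graph = [[] for _ in range(num_nodes)]  # each edge is [to, capacity, index of reverse edge]

    def add_edge(u, v, cap):
        graph[u].append([v, cap, len(graph[v])])
        graph[v].append([u, 0, len(graph[u]) - 1])

    for i in range(h):
        for j in range(w):
            cell = grid[i][j]
            cell_in, cell_out = 2 * (i * w + j), 2 * (i * w + j) + 1
            # '0' squares get capacity 0: they already block passage at no cost to the emperor
            add_edge(cell_in, cell_out, INF if cell in "AB" else int(cell))
            if cell == "A":
                add_edge(S, cell_in, INF)
            elif cell == "B":
                add_edge(cell_out, T, INF)
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    nb_in, nb_out = 2 * (ni * w + nj), 2 * (ni * w + nj) + 1
                    add_edge(cell_out, nb_in, INF)
                    add_edge(nb_out, cell_in, INF)

    # Edmonds-Karp max flow; by max-flow/min-cut this equals the minimum number of soldiers
    flow = 0
    while True:
        parent = [None] * num_nodes
        parent[S] = (S, -1)
        queue = deque([S])
        while queue and parent[T] is None:
            u = queue.popleft()
            for idx, (v, cap, _rev) in enumerate(graph[u]):
                if cap > 0 and parent[v] is None:
                    parent[v] = (u, idx)
                    queue.append(v)
        if parent[T] is None:
            break
        bottleneck, v = INF, T   # every source-sink path crosses some finite digit square
        while v != S:
            u, idx = parent[v]
            bottleneck = min(bottleneck, graph[u][idx][1])
            v = u
        v = T
        while v != S:
            u, idx = parent[v]
            graph[u][idx][1] -= bottleneck
            graph[v][graph[u][idx][2]][1] += bottleneck
            v = u
        flow += bottleneck

    print(flow)

main()
```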
2306.09896#110
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
111
# F PROMPTS In this appendix, we describe the prompting structure used for our experiments. To avoid repetition, we list the prompts used for the GPT models here; for CodeLlama, we use identical prompts except that we replace ```python...``` block delimiters with [PYTHON]...[/PYTHON] delimiters, since we find that this significantly improves code extraction reliability. # F.1 APPS For initial code generation (the first sample from M_P), we use different prompts for the two types of tasks in APPS: call-based tasks, in which the desired program should take the input as a parameter to a function and return the output in the function’s return statement; and stdio-based tasks, in which inputs should be read from stdin and outputs should be written to stdout. These prompts are shown in Listing 1 and 2, respectively. The example tasks and programs were taken from APPS’ training set.
2306.09896#111
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
112
For feedback samples (i.e., samples from M_F), we use the prompt in Listing 3. This prompt contains an example in which the user provides the textual specification, the incorrect program and the error message, and the assistant generates feedback. Similarly, for repair samples (i.e., samples from M_P which follow M_F) we use the prompt in Listing 4, in which the user also supplies the feedback, and the assistant returns only the fixed version of the program. Finally, for joint feedback-repair samples (i.e., when sampling (f, r) ∼ M_P), we use the prompt in Listing 5. This prompt combines the prompts from Listing 3 and 4 into one prompt, in which the assistant returns both the feedback and the fixed program. In all of these prompts, the specification used was taken from APPS' training set, while the programs and the feedback were constructed manually. Listing 1: Code generation prompt for call-based tasks. =====system===== You are an expert Python programmer. You will be given a question (problem specification) and will generate a
2306.09896#112
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
113
correct Python program that matches the specification and passes all tests. You will NOT return anything except for the program. Put your fixed program within code delimiters, for example: ```python # YOUR CODE HERE ```. =====user===== ### QUESTION A palindrome is a word, phrase, number, or other sequence of characters which reads the same backward as forward. Examples of numerical palindromes are: `2332, 110011, 54322345` For a given number ```num```, write a function which returns an array of all the numerical palindromes contained within each number. The array should be sorted in ascending order and any duplicates should be removed. In this kata, single digit numbers and numbers which start or end with zeros (such as `010` and `00`) are **NOT** considered valid numerical palindromes. If `num` contains no valid palindromes, return `"No palindromes found"`. Otherwise, return `"Not valid"` if the input is not an integer or is less than `0`. ## Examples ``` palindrome(1221) palindrome(34322122) palindrome(1001331) palindrome(1294) palindrome("1221") ``` => => =>
2306.09896#113
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
114
palindrome(1221) => [22, 1221] palindrome(34322122) => [22, 212, 343, 22122] palindrome(1001331) => [33, 1001, 1331] palindrome(1294) => "No palindromes found" palindrome("1221") => "Not valid" ``` --- ### Other Kata in this Series: Numerical Palindrome #1 Numerical Palindrome #1.5 Numerical Palindrome #2 Numerical Palindrome #3 Numerical Palindrome #3.5 Numerical Palindrome #4 Numerical Palindrome #5 ### Use Call-Based Format =====assistant===== ```python def is_pal(s): return s == s[::-1]
2306.09896#114
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
115
def palindrome(num): if not isinstance(num, int) or num < 0: return 'Not valid' s = str(num) pals = set() for i, ch in enumerate(s): if ch == '0': continue for j in range(i + 2, len(s) + 1): test = s[i:j] if is_pal(test): pals.add(test) return sorted(int(x) for x in pals) or 'No palindromes found' ``` Listing 2: Code generation prompt for stdio-based tasks. =====system===== You are an expert Python programmer. You will be given a question (problem specification) and will generate a
2306.09896#115
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
116
correct Python program that matches the specification and passes all tests. You will NOT return anything except for the program. Put your fixed program within code delimiters, for example: ```python # YOUR CODE HERE ```. =====user===== ### QUESTION Polycarp has $n$ different binary words. A word called binary if it contains only characters '0' and '1'. For example, these words are binary: "0001", "11", "0" and "0011100". Polycarp wants to offer his set of $n$ binary words to play a game "words". In this game, players name words and each next word (starting from the second) must start with the last character of the previous word. The first word can be any. For example, these sequence of words can be named during the game: "0101", "1", "10", "00", "00001". Word reversal is the operation of reversing the order of the characters. For example, the word "0111" after the reversal becomes "1110", the word "11010" after the reversal becomes "01011". Probably, Polycarp has such a set of words that there is no way to put them in the order correspondent to the game rules. In this situation, he wants to reverse some words from his
2306.09896#116
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
117
has such a set of words that there is no way to put them in the order correspondent to the game rules. In this situation, he wants to reverse some words from his set so that: the final set of $n$ words still contains different words (i.e. all words are unique); there is a way to put all words of the final set of words in the order so that the final sequence of $n$ words is consistent with the game rules. Polycarp wants to reverse minimal number of words. Please, help him. -----Input----- The first line of the input contains one integer $t$ ($1 \le t \le 10\^4$) - the number of test cases in the input. Then $t$ test cases follow. The first line of a test case contains one integer $n$ ($1 \le n \le 2\cdot10^5$) - the number of words in the Polycarp's set. Next $n$ lines contain these words. All of $n$ words aren't empty and contains only characters '0' and '1'. The sum of word lengths doesn't exceed $4\cdot10^6$. All words are different. Guaranteed, that the sum of $n$ for all test cases in the input doesn't exceed $2\cdot10^5$. Also,
2306.09896#117
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
118
All words are different. Guaranteed, that the sum of $n$ for all test cases in the input doesn't exceed $2\cdot10^5$. Also, guaranteed that the sum of word lengths for all test cases in the input doesn't exceed $4\cdot10^6$. -----Output----- Print answer for all of $t$ test cases in the order they appear. If there is no answer for the test case, print -1. Otherwise, the first line of the output should contain $k$ ($0 \le k \le n$) - the minimal number of words in the set which should be reversed. The second line of the output should contain $k$ distinct integers - the indexes of the words in the set which should be reversed. Words are numerated from $1$ to $n$ in the order they appear. If $k=0$ you can skip this line (or you can print an empty line). If there are many answers you can print any of them. -----Example----- Input 4 4 0001 1000 0011 0111 3 010 101 0 2 00000 00001 4 01 001 0001 00001 Output
2306.09896#118
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
120
1 3 -1 0 2 1 2 ### Use Standard Input Format (read inputs with `input()`, write results with `print()`) =====assistant===== ```python for _ in range(int(input())): n = int(input()) mass = [] zo = 0 oz = 0 zz = 0 oo = 0 ozs = [] zos = [] ozss = set() zoss = set() for j in range(n): k = input() mass.append(k) if k[0] == '0' and k[-1] == '1': zoss.add(k) zos.append(j + 1) zo += 1 elif k[0] == '1' and k[-1] == '0': ozss.add(k) ozs.append(j + 1) oz += 1 elif k[0] == '0' and k[-1] == '0': zz += 1 else: oo += 1 if zz and oo and not oz and not zo: print(-1) continue else: if zo > oz: print((zo - oz) // 2) ans = [] need = (zo - oz) // 2 i = 0 while need: zzz = mass[zos[i] - 1][len(mass[zos[i] - 1]) - 1:: -1] if zzz not in ozss:
2306.09896#120
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
122
Listing 3: Feedback-only prompt. =====system===== You are a helpful programming assistant and an expert Python programmer. You are helping a user debug a program. The user has written some code, but it has some errors and is not passing the tests. You will help the user by giving a concise (at most 2-3 sentences) textual explanation of what is wrong with the code. You will *not* generate any code, because the user wants to fix the code themselves. =====user===== ### QUESTION A palindrome is a word, phrase, number, or other sequence of characters which reads the same backward as forward. Examples of numerical palindromes are: `2332, 110011, 54322345` For a given number ```num```, write a function which returns an array of all the numerical palindromes contained within each number. The array should be sorted in ascending order and any duplicates should be removed. In this kata, single digit numbers and numbers which start or end with zeros (such as `010` and `00`) are **NOT** considered valid numerical palindromes. If `num` contains no valid palindromes, return `"No palindromes found"`. Otherwise, return `"Not valid"` if the input is not an integer or is less than `0`. ## Examples
2306.09896#122
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
123
``` palindrome(1221) => [22, 1221] palindrome(34322122) => [22, 212, 343, 22122] palindrome(1001331) => [33, 1001, 1331] palindrome(1294) => "No palindromes found" palindrome("1221") => "Not valid" ``` --- ### Other Kata in this Series: Numerical Palindrome #1 Numerical Palindrome #1.5 Numerical Palindrome #2 Numerical Palindrome #3 Numerical Palindrome #3.5 Numerical Palindrome #4 Numerical Palindrome #5 ### Use Call-Based Format ### INCORRECT PYTHON CODE ```python def is_pal(s): return s == s[::-1] def palindrome(num): if not isinstance(num, int) or num < 0: return 'Not valid' s = str(num) pals = set() for i, ch in enumerate(s): for j in range(i + 2, len(s) + 1): test = s[i:j] if is_pal(test): pals.add(test) return sorted(int(x) for x in pals) or 'No palindromes found'
2306.09896#123
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
125
``` The code above is wrong and contains a bug. Given input "1001331" the output was "['0', '33', '1001', '1331']" but the expected output was "['33', '1001', '1331']". =====assistant===== The following is a concise explanation of the issue: The problem description states that numbers which start or end with zeros (such as `010` and `00`) are NOT considered valid numerical palindromes. However, the code above does not take this into account and therefore returns `00` as a valid palindrome. This can be fixed by checking if the first or last character is `0` before adding the string to the set of palindromes. Listing 4: Repair-only prompt.
2306.09896#125
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
126
=====system===== You are a helpful programming assistant and an expert Python programmer. You are helping a user write a program to solve a problem. The user has written some code, but it has some errors and is not passing the tests. The user has spent some time debugging the program and will provide you with a concise textual explanation of what is wrong with the code. You will use this explanation to generate a fixed version of the program. Put your fixed program within code delimiters, for example: ```python # YOUR CODE HERE ```. =====user===== ### QUESTION A palindrome is a word, phrase, number, or other sequence of characters which reads the same backward as forward. Examples of numerical palindromes are: `2332, 110011, 54322345` For a given number ```num```, write a function which returns an array of all the numerical palindromes contained within each number. The array should be sorted in ascending order and any duplicates should be removed. In this kata, single digit numbers and numbers which start or end with zeros (such as `010` and `00`) are **NOT** considered valid numerical palindromes. If `num` contains no valid
2306.09896#126
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
127
start or end with zeros (such as `010` and `00`) are **NOT** considered valid numerical palindromes. If `num` contains no valid palindromes, return `"No palindromes found"`. Otherwise, return `"Not valid"` if the input is not an integer or is less than `0`. ## Examples ``` palindrome(1221) => [22, 1221] palindrome(34322122) => [22, 212, 343, 22122] palindrome(1001331) => [33, 1001, 1331] palindrome(1294) => "No palindromes found" palindrome("1221") => "Not valid" ``` --- ### Other Kata in this Series: Numerical Palindrome #1 Numerical Palindrome #1.5 Numerical Palindrome #2 Numerical Palindrome #3
2306.09896#127
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
129
Numerical Palindrome #3.5 Numerical Palindrome #4 Numerical Palindrome #5 ### Use Call-Based Format ### INCORRECT PYTHON CODE ```python def is_pal(s): return s == s[::-1] def palindrome(num): if not isinstance(num, int) or num < 0: return 'Not valid' s = str(num) pals = set() for i, ch in enumerate(s): for j in range(i + 2, len(s) + 1): test = s[i:j] if is_pal(test): pals.add(test) return sorted(int(x) for x in pals) or 'No palindromes found' ``` The code above is wrong and contains a bug. Given input "1001331" the output was "['0', '33', '1001', '1331']" but the expected output was "['33', '1001', '1331']". The following is a concise explanation of the issue: The problem description states that numbers which start or end with zeros (such as `010` and `00`) are NOT considered valid numerical palindromes. However, the code above does not take this into account and therefore returns `00` as a valid
2306.09896#129
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
130
and `00`) are NOT considered valid numerical palindromes. However, the code above does not take this into account and therefore returns `00` as a valid palindrome. This can be fixed by checking if the first or last character is `0` before adding the string to the set of palindromes. =====assistant===== ### FIXED PYTHON CODE ```python def is_pal(s): return s == s[::-1] def palindrome(num): if not isinstance(num, int) or num < 0: return 'Not valid' s = str(num) pals = set() for i, ch in enumerate(s): if ch == '0': continue for j in range(i + 2, len(s) + 1): test = s[i:j] if is_pal(test): pals.add(test) return sorted(int(x) for x in pals) or 'No palindromes found'
2306.09896#130
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
132
=====system===== You are a helpful programming assistant and an expert Python programmer. You are helping a user write a program to solve a problem. The user has written some code, but it has some errors and is not passing the tests. You will help the user by first giving a concise (at most 2-3 sentences) textual explanation of what is wrong with the code. After you have pointed out what is wrong with the code, you will then generate a fixed version of the program. Put your fixed program within code delimiters, for example: ```python # YOUR CODE HERE ```. =====user===== ### QUESTION A palindrome is a word, phrase, number, or other sequence of characters which reads the same backward as forward. Examples of numerical palindromes are: `2332, 110011, 54322345` For a given number ```num```, write a function which returns an array of all the numerical palindromes contained within each number. The array should be sorted in ascending order and any duplicates should be removed. In this kata, single digit numbers and numbers which start or end with zeros (such as `010` and `00`) are **NOT** considered valid numerical palindromes. If
2306.09896#132
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
133
digit numbers and numbers which start or end with zeros (such as `010` and `00`) are **NOT** considered valid numerical palindromes. If `num` contains no valid palindromes, return `"No palindromes found"`. Otherwise, return `"Not valid"` if the input is not an integer or is less than `0`. ## Examples ``` palindrome(1221) => [22, 1221] palindrome(34322122) => [22, 212, 343, 22122] palindrome(1001331) => [33, 1001, 1331] palindrome(1294) => "No palindromes found" palindrome("1221") => "Not valid" ``` --- ### Other Kata in this Series: Numerical Palindrome #1
2306.09896#133
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
135
Numerical Palindrome #1.5 Numerical Palindrome #2 Numerical Palindrome #3 Numerical Palindrome #3.5 Numerical Palindrome #4 Numerical Palindrome #5 ### Use Call-Based Format ### INCORRECT PYTHON CODE ```python def is_pal(s): return s == s[::-1] def palindrome(num): if not isinstance(num, int) or num < 0: return 'Not valid' s = str(num) pals = set() for i, ch in enumerate(s): for j in range(i + 2, len(s) + 1): test = s[i:j] if is_pal(test): pals.add(test) return sorted(int(x) for x in pals) or 'No palindromes found' ``` The code above is wrong and contains a bug. Given input "1001331" the output was "['0', '33', '1001', '1331']" but the expected output was "['33', '1001', '1331']". =====assistant===== The following is a concise explanation of the issue: The problem description states that numbers which start or end with zeros (such as `010` and `00`) are NOT
2306.09896#135
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
136
following is a concise explanation of the issue: The problem description states that numbers which start or end with zeros (such as `010` and `00`) are NOT considered valid numerical palindromes. However, the code above does not take this into account and therefore returns `00` as a valid palindrome. This can be fixed by checking if the first or last character is `0` before adding the string to the set of palindromes. The code below is the correct version of the code above, where the issue has been fixed: ### FIXED PYTHON CODE ```python def is_pal(s): return s == s[::-1] def palindrome(num): if not isinstance(num, int) or num < 0: return 'Not valid' s = str(num) pals = set() for i, ch in enumerate(s): if ch == '0': continue for j in range(i + 2, len(s) + 1): test = s[i:j] if is_pal(test): pals.add(test)
2306.09896#136
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
137
return sorted(int(x) for x in pals) or 'No palindromes found' ``` F.2 HUMANEVAL For HumanEval, since there are no training samples to use for in-context learning, we generate the initial programs in a zero-shot manner. For feedback, repair, and joint feedback-repair samples we use the prompts in Listings 6-8. These all include a one-shot example specification, program, error message and explanation; these were manually crafted based on the specification and gold standard program for HumanEval/0. Although not strictly necessary, we find that using an in-context example when doing self-repair encouraged the model to use a format in which we could easily separate (and analyze) the feedback and the repaired program, rather than interleaving the two. Note that the form of the error message differs slightly from APPS, since we use a different execution engine for HumanEval. Listing 6: Feedback-only prompt. =====system===== You are a helpful programming assistant. You are helping a user write a program to solve a problem. The user has written some code, but it has some errors and is not passing the tests. You will help the user by giving a detailed but concise textual explanation of what is wrong with the code. You will *not* generate any code, because the user wants to fix the code themselves.
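The HumanEval error messages quoted in Listings 6-8 (e.g. `failed: assert approx_unique_elements([1.0, 2.0, 3.0], 0.5) == True: False != True`) can be reproduced with a very small execution harness. The snippet below is an illustration only: the actual execution engine used for HumanEval is not shown in this appendix, and the function name `run_single_test`, the simple `==`-splitting logic, and the exact message formatting are assumptions made for the sketch.

```python
# Hypothetical sketch of producing a HumanEval-style error message by running
# a candidate program against a single assert-style test expression.
import traceback

def run_single_test(candidate_src: str, test_expr: str):
    """Return an error string like `failed: assert <expr>: <got> != <expected>`,
    or None if the test passes. Assumes `test_expr` has the form `<call> == <expected>`."""
    env = {}
    try:
        exec(candidate_src, env)                  # define the candidate function(s)
        call_src, expected_src = test_expr.split("==", 1)
        got = eval(call_src, env)                 # run the call under test
        expected = eval(expected_src, env)
        if got != expected:
            return f"failed: assert {test_expr.strip()}: {got} != {expected}"
        return None
    except Exception:
        return f"failed: {traceback.format_exc(limit=1).strip()}"

# Example with a deliberately buggy candidate:
buggy = "def approx_unique_elements(numbers, threshold):\n    return False\n"
print(run_single_test(buggy, "approx_unique_elements([1.0, 2.0, 3.0], 0.5) == True"))
# -> failed: assert approx_unique_elements([1.0, 2.0, 3.0], 0.5) == True: False != True
```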
2306.09896#137
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
138
=====user===== ### INCORRECT CODE ```python def approx_unique_elements(numbers: List[float], threshold: float) -> bool: """ Check if in given list of numbers, there are no two numbers closer to each other than the threshold given. >>> approx_unique_elements([1.0, 2.0, 3.0], 0.5) True >>> approx_unique_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) False """ for elem in numbers: for elem2 in numbers: distance = abs(elem - elem2) if distance < threshold: return False return True ``` The code does not pass the test cases. The error encountered was: `failed: assert approx_unique_elements([1.0, 2.0, 3.0], 0.5) == True: False != True` =====assistant===== The following is a concise explanation of why the code failed the test: The code fails because it also compares elements to themselves. Thus, for any input, the answer will be False. This can be fixed by adding a condition to the inner loop that checks that the indices are not the same. Listing 7: Repair-only prompt.
2306.09896#138
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
140
program to solve a problem. The user has written some code, but it has some errors and is not passing the tests. The user has spent some time debugging the program and will provide you with a concise textual explanation of what is wrong with the code. You will use this explanation to generate a fixed version of the program. Put your fixed program within code delimiters, for example: ```python # YOUR CODE HERE ```. =====user===== ### INCORRECT CODE ```python def approx_unique_elements(numbers: List[float], threshold: float) -> bool: """ Check if in given list of numbers, there are no two numbers closer to each other than the threshold given. >>> approx_unique_elements([1.0, 2.0, 3.0], 0.5) True >>> approx_unique_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) False """ for elem in numbers: for elem2 in numbers: distance = abs(elem - elem2) if distance < threshold: return False return True ``` The code does not pass the test cases. The error encountered was: `failed: assert
2306.09896#140
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
141
- elem2) if distance < threshold: return False return True ``` The code does not pass the test cases. The error encountered was: `failed: assert approx_unique_elements([1.0, 2.0, 3.0], 0.5) == True: False != True` The following is a concise explanation of why the code failed the test: The code fails because it also compares elements to themselves. Thus, for any input, the answer will be False. This can be fixed by adding a condition to the inner loop that checks that the indices are not the same. =====assistant===== ### FIXED CODE ```python def approx_unique_elements(numbers: List[float], threshold: float) -> bool: """ Check if in given list of numbers, there are no two numbers closer to each other than the threshold given. >>> approx_unique_elements([1.0, 2.0, 3.0], 0.5) True >>> approx_unique_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) False """ for idx, elem in enumerate(numbers): for idx2, elem2 in enumerate(numbers): if
2306.09896#141
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
143
``` Listing 8: Joint feedback-repair prompt. =====system===== You are a helpful programming assistant and an expert Python programmer. You are helping a user write a program to solve a problem. The user has written some code, but it has some errors and is not passing the tests. You will help the user by first giving a concise (at most 2-3 sentences) textual explanation of what is wrong with the code. After you have pointed out what is wrong with the code, you will then generate a fixed version of the program. Put your fixed program within code delimiters, for example: ```python # YOUR CODE HERE ```. =====user===== ### INCORRECT CODE ```python def approx_unique_elements(numbers: List[float], threshold: float) -> bool: """ Check if in given list of numbers, there are no two numbers closer to each other than the threshold given. >>> approx_unique_elements([1.0, 2.0, 3.0], 0.5) True
2306.09896#143
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
144
>>> approx_unique_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) False """ for elem in numbers: for elem2 in numbers: distance = abs(elem - elem2) if distance < threshold: return False return True ``` The code does not pass the test cases. The error encountered was: `failed: assert approx_unique_elements([1.0, 2.0, 3.0], 0.5) == True: False != True` =====assistant===== The following is a concise explanation of why the code failed the test: The code fails because it also compares elements to themselves. Thus, for any input, the answer will be False. This can be fixed by adding a condition to the inner loop that checks that the indices are not the same. ### FIXED CODE ```python def approx_unique_elements(numbers: List[float], threshold: float) -> bool: """ Check if in given list of numbers, there are no two numbers closer to each other than the threshold given. >>> approx_unique_elements([1.0, 2.0, 3.0], 0.5) True >>>
2306.09896#144
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
147
Difficulty Tasks introductory ’4004’, ’4058’, ’4063’, ’4065’, ’4100’, ’4108’, ’4117’, ’4155’, ’4164’, ’4182’, ’4193’, ’4195’, ’4211’, ’4217’, ’4241’, ’4249’, ’4270’, ’4275’, ’4281’, ’4293’, ’4333’, ’4347’, ’4350’, ’4356’, ’4409’, ’4426’, ’4431’, ’4450’, ’4465’, ’4484’, ’4498’, ’4505’, ’4507’, ’4514’, ’4544’, ’4553’, ’4586’, ’4610’, ’4662’, ’4663’, ’4667’,
2306.09896#147
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
148
’4586’, ’4610’, ’4662’, ’4663’, ’4667’, ’4677’, ’4681’, ’4704’, ’4716’, ’4741’, ’4750’, ’4786’, ’4787’, ’4801’, ’4855’, ’4862’, ’4864’, ’4870’, ’4873’, ’4890’, ’4897’, ’4952’, ’4966’, ’4984’ interview ’0004’, ’0013’, ’0033’, ’0056’, ’0073’, ’0074’, ’0089’, ’0091’, ’0124’, ’0131’, ’0139’, ’0162’, ’0166’, ’0183’, ’0186’, ’0191’, ’0199’, ’0205’,
2306.09896#148
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
149
’0183’, ’0186’, ’0191’, ’0199’, ’0205’, ’0249’, ’0253’, ’0268’, ’0274’, ’0300’, ’0304’, ’0341’, ’0342’, ’0413’, ’0427’, ’0434’, ’0466’, ’0467’, ’0496’, ’0501’, ’0511’, ’0537’, ’0564’, ’0571’, ’0575’, ’0579’, ’0592’, ’0597’, ’0626’, ’0637’, ’0676’, ’0704’, ’0728’, ’0757’, ’0765’, ’0788’, ’0794’, ’0804’, ’0805’, ’0811’, ’0829’, ’0879’,
2306.09896#149
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
150
’0804’, ’0805’, ’0811’, ’0829’, ’0879’, ’0904’, ’0915’, ’0925’, ’0937’, ’0948’, ’0954’, ’0955’, ’0972’, ’0985’, ’0989’, ’1018’, ’1019’, ’1033’, ’1046’, ’1076’, ’1133’, ’1140’, ’1141’, ’1145’, ’1146’, ’1149’, ’1168’, ’1185’, ’1221’, ’1232’, ’1256’, ’1257’, ’1280’, ’1285’, ’1299’, ’1317’, ’1347’, ’1380’, ’1392’, ’1393’, ’1418’, ’1444’,
2306.09896#150
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
151
’1380’, ’1392’, ’1393’, ’1418’, ’1444’, ’1448’, ’1458’, ’1489’, ’1517’, ’1533’, ’1573’, ’1635’, ’1653’, ’1668’, ’1672’, ’1721’, ’1736’, ’1748’, ’1756’, ’1759’, ’1775’, ’1777’, ’1825’, ’1850’, ’1863’, ’1865’, ’1870’, ’1875’, ’1906’, ’1917’, ’1956’, ’1962’, ’1967’, ’1976’, ’2024’, ’2049’, ’2062’, ’2092’, ’2093’, ’2097’, ’2106’, ’2172’, ’2176’,
2306.09896#151
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
152
’2093’, ’2097’, ’2106’, ’2172’, ’2176’, ’2203’, ’2231’, ’2246’, ’2264’, ’2266’, ’2295’, ’2326’, ’2328’, ’2332’, ’2342’, ’2361’, ’2369’, ’2407’, ’2408’, ’2418’, ’2455’, ’2463’, ’2511’, ’2515’, ’2516’, ’2535’, ’2585’, ’2623’, ’2629’, ’2642’, ’2651’, ’2662’, ’2668’, ’2673’, ’2698’, ’2701’, ’2709’, ’2735’, ’2742’, ’2752’, ’2759’, ’2765’,
2306.09896#152
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
153
’2735’, ’2742’, ’2752’, ’2759’, ’2765’, ’2787’, ’2802’, ’2832’, ’2835’, ’2844’, ’2858’, ’2885’, ’2897’, ’2923’, ’2932’, ’2945’, ’2973’, ’2980’ competition ’3017’, ’3019’, ’3054’, ’3062’, ’3063’, ’3066’, ’3070’, ’3077’, ’3083’, ’3097’, ’3117’, ’3135’, ’3161’, ’3186’, ’3209’, ’3220’, ’3286’, ’3287’, ’3323’, ’3335’, ’3353’, ’3355’, ’3371’, ’3375’,
2306.09896#153
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
154
’3335’, ’3353’, ’3355’, ’3371’, ’3375’, ’3376’, ’3388’, ’3404’, ’3411’, ’3433’, ’3441’, ’3445’, ’3470’, ’3481’, ’3484’, ’3548’, ’3557’, ’3605’, ’3609’, ’3634’, ’3635’, ’3671’, ’3679’, ’3709’, ’3754’, ’3769’, ’3792’, ’3798’, ’3799’, ’3804’, ’3810’, ’3819’, ’3823’, ’3836’, ’3843’, ’3849’, ’3876’, ’3913’, ’3934’, ’3972’, ’3974’
2306.09896#154
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
155
H MORE EXAMPLES OF FEEDBACK FROM GPT-4 AND OUR HUMAN PARTICIPANTS In this appendix, we give five more examples from the study in Section 4.3, so that the keen reader may themself compare and contrast the feedback generated by GPT-4 and by our human participants. Each listing (6-10) contains a textual specification, an incorrect program, an error message, both examples of GPT-4 feedback which were shown to the participants assigned to this task, and the responses from the participants. For each piece of feedback, we also list the number of resulting repaired programs which passed all the tests (c.f. the full results table in Appendix D). # Listing 9: Task 2106, Program A. ### SPECIFICATION There are n cities in the country where the Old Peykan lives. These cities are located on a straight line, we'll denote them from left to right as c_1, c_2, ..., c_{n}. The Old Peykan wants to travel from city c_1 to c_{n} using roads. There are (n - 1) one way roads, the i-th road goes from city c_{i} to city c_{i} + 1 and is d_{i} kilometers long.
2306.09896#155
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
156
The Old Peykan travels 1 kilometer in 1 hour and consumes 1 liter of fuel during this time. Each city c_{i} (except for the last city c_{n}) has a supply of s_{i} liters of fuel which immediately transfers to the Old Peykan if it passes the city or stays in it. This supply refreshes instantly k hours after it transfers. The Old Peykan can stay in a city for a while and fill its fuel tank many times. Initially (at time zero) the Old Peykan is at city c_1 and s_1 liters of fuel is transferred to it's empty tank from c_1's supply. The Old Peykan's fuel tank capacity is unlimited. Old Peykan can not continue its travel if its tank is emptied strictly between two cities. Find the minimum time the Old Peykan needs to reach city c_{n}. -----Input----- The first line of the input contains two space-separated integers m and k (1 <= m, k <= 1000). The value m specifies the number of roads between cities which is equal to n - 1. The next line contains m space-separated integers d_1, d_2, ..., d_{m} (1 <= d_{i} <= 1000) and
2306.09896#156
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
157
The next line contains m space-separated integers d_1, d_2, ..., d_{m} (1 <= d_{i} <= 1000) and the following line contains m space-separated integers s_1, s_2, ..., s_{m} (1 <= s_{i} <= 1000). -----Output----- In the only line of the output print a single integer - the minimum time required for The Old Peykan to reach city c_{n} from city c_1. -----Examples----- Input 4 6 1 2 5 2 2 3 3 4 Output 10 Input 2 3 5 6 5 5 Output 14 ### INCORRECT PROGRAM def main(): m, k = map(int, input().split()) ds = list(map(int, input().split())) ss = list(map(int, input().split())) time = 0 fuel = ss[0] min_time = 0 max_s = ss[0] for i in range(m): fuel -= ds[i] max_s = max(max_s, ss[i]) while fuel < 0: time += k fuel += max_s time += ds[i] print(time) main()
2306.09896#157
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
159
### ERROR MESSAGE ===Input=== 4 6 1 2 5 2 2 3 3 4 ===Program Output=== 28 ===Expected Output=== 10 ### GPT-4 FEEDBACK #1 The issue with the code is that it does not consider the time spent in the current city before moving to the next city, resulting in an overestimation of the minimum time required. This can be fixed by updating the time spent in the current city before calculating the fuel used and accounting for the time it takes to travel to the next city. # Number of passing repairs: 7/25 ### GPT-4 FEEDBACK #2 The following is a concise explanation of the issue: The code incorrectly calculates the waiting time at each city. It should minimize the waiting time by considering the maximum fuel supply that can be collected during the waiting time. The code needs to be updated to account for this. # Number of passing repairs: 10/25 ### PARTICIPANT FEEDBACK #1 The program doesn't add the fuel for passing through cities, only if it needs to actually stay in the city to refuel. Whenever it passes through a city, it should add ss[i] to the fuel. # Number of passing repairs: 10/25 ### PARTICIPANT FEEDBACK #2 The code is
2306.09896#159
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
160
through a city, it should add ss[i] to the fuel. # Number of passing repairs: 10/25 ### PARTICIPANT FEEDBACK #2 The code is incorrect because it doesn't take into account the fuel at each city that is immediately transferred after entering the city (aside for the fuel received at the first city). As a result, this program overestimates the time needed, because it assumes that the Old Peykan always has to wait for a refill (besides the fuel they are given in the first city). # Number of passing repairs: 0/25
2306.09896#160
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
162
### SPECIFICATION In the free-market, ruthlessly capitalist world of train fares, only one thing matters: incentives. Train companies are incentivised with bonuses for high throughput, successful journeys, and customer satisfaction. Conversely, the companies are disincentivised from failure via mandatory refunds for customers delayed by $30$ minutes or more. Being a ruthless capitalist yourself, you have decided to take advantage of this generous delay compensation provision. The refund is awarded provided that no matter the combination of trains you had taken (provided they followed the same route of stations as planned), you would still be unable to reach your destination in strictly less time than $30$ minutes (or $1800$ seconds), of the time you would have arrived assuming your booked journey was exactly on time. Armed with your printout of the day's delays, and the original timetable, you must ask yourself only one question: what is the earliest time you can book a train for from station $1$, in order to earn this restitutive reward? -----Input----- - One line containing two integers: $N$ ($1 \le N \le 100$), the number of stations, and $M$ ($1 \le M
2306.09896#162
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
163
-----Input----- - One line containing two integers: $N$ ($1 \le N \le 100$), the number of stations, and $M$ ($1 \le M \le 10^5$), the number of scheduled trains. - The next $M$ lines each contain 4 integers: - $X$, the starting station ($1 \le X \le N-1$), - $S$ and $T$ ($0 \le S \le T < 86400$), the planned departure and arrival times in seconds, - and $L$ ($0 \le L < 86400$), the duration by which the train's departure and arrival times are delayed. Stations are numbered from $1$ to $N$ in the order you will visit them. Each train goes between stations $X$ and $X+1$. It is possible to change between trains instantanesouly. -----Output----- - One line containing one integer: the start time of the earliest train journey you could book in order to earn your compensation, or impossible if no such journey is possible. -----Examples----- Sample Input 1: 2 3 1 1800 9000 1800 1 2000 9200 1600 1 2200 9400 1400 Sample Output 1: 1800 Sample Input 2: 2 2 1 1800 3600 1800 1 1900 3600 1600 Sample Output
2306.09896#163
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
166
### INCORRECT PROGRAM def main(): N, M = map(int, input().split()) train_data = [tuple(map(int, input().split())) for _ in range(M)] train_data.sort(key=lambda x: (x[0], x[1])) times = [1e9] * (N + 1) times[1] = 0 for X, S, T, L in train_data: if times[X] <= S: times[X + 1] = min(times[X + 1], T + L) if times[N] >= 1800: print(int(times[1])) else: print("impossible") main() ### ERROR MESSAGE ===Input=== 2 3 1 1800 9000 1800 1 2000 9200 1600 1 2200 9400 1400 ===Program Output=== 0 ===Expected Output=== 1800 ### GPT-4 FEEDBACK #1 The issue with the code is that it prints `times[1]` instead of the earliest possible train start time. The code needs to get the minimum start time from the train_data list that has a valid delay for compensation. # Number of passing repairs: 3/25 ### GPT-4 FEEDBACK #2 The issue with the code is that it directly prints the value of `times[1]`, which is initialized to
2306.09896#166
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
167
repairs: 3/25 ### GPT-4 FEEDBACK #2 The issue with the code is that it directly prints the value of `times[1]`, which is initialized to `0` and never updated. The correct output should be the starting time of the earliest train journey that satisfies the condition. To fix this, we need to track the earliest possible start time and update it if a valid train journey is found. # Number of passing repairs: 0/25 ### PARTICIPANT FEEDBACK #1 This code is clearly wrong as times[1] is initialized to 0 then never updated and it is returned as the answer. Also, in the sorting line `int(x[0])` should be used in place of `x[0]` and likewise for `int(x[1])` in place of `x1` because we want numerical order sorting not string order sorting. The goal in this problem is calculate for each station what the earliest-arriving undelayed path to the station is and what the earliest-arriving delayed path to the station is, take the difference between the two, and see if it results in a total delay of >= 30 minutes (then to return the earliest departure from station 1 for this path. Given this, first the current code based around
2306.09896#167
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
168
see if it results in a total delay of >= 30 minutes (then to return the earliest departure from station 1 for this path. Given this, first the current code based around `times` can remain - this is the delayed time. However, the if-statement within the for-loop should be `<= S + L` since the departure time is also delayed. Then `undelayed_times` should be tracked as well, updated in the foor loop when T+L). Finally the if-statement for time printing can use the difference between the delayed and undelayed times and see if any() are >= 1800. Finally we need to print the earliest arriving train for that path `undelayed_times[X] <= S` and set to a min() between the current undelayed time an `T` (not # Number of passing repairs: 5/25 ### PARTICIPANT FEEDBACK #2 One of the main issues is that the code prints 'times[1]', which is set to 0 and never updated. However, there are also some larger issues with the code. First, instead of checking if you reach your destination within 30 minutes of the expected time, it checks if you reach your destination in 30 minutes. To fix this, the program would need to keep track of the time you'd
2306.09896#168
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
169
destination within 30 minutes of the expected time, it checks if you reach your destination in 30 minutes. To fix this, the program would need to keep track of the time you'd reach your destination if there were no delays, which it currently doesn't. Second, instead of printing times[1], it should be keeping track of the start time of the train that leads to a delay and printing this. # Number of passing repairs: 0/25
2306.09896#169
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
174
-----Sample Input----- 0 0 0 1 -----Sample Output----- -1 1 -1 0 (0,0),(0,1),(-1,1),(-1,0) is the four vertices of a square in counter-clockwise order. Note that (x_3,y_3)=(1,1),(x_4,y_4)=(1,0) is not accepted, as the vertices are in clockwise order. ### INCORRECT PROGRAM x1, y1, x2, y2 = map(int, input().split()) x3 = x2 + (y2 - y1) y3 = y2 - (x2 - x1) x4 = x1 + (y2 - y1) y4 = y1 - (x2 - x1) print(x3, y3, x4, y4) ### ERROR MESSAGE ===Input=== 0 0 0 1 ===Program Output=== 1 1 1 0 ===Expected Output=== -1 1 -1 0 ### GPT-4 FEEDBACK #1 The issue with the code is that it doesn't consider the counter-clockwise order of the vertices. To fix this, we need to swap the signs of (y2 - y1) and (x2 - x1) in the calculations. # Number of passing repairs: 25/25 ###
2306.09896#174
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
175
to swap the signs of (y2 - y1) and (x2 - x1) in the calculations. # Number of passing repairs: 25/25 ### GPT-4 FEEDBACK #2 The following is a concise explanation of the issue: The code above does not ensure that the vertices are in counter-clockwise order. It calculates x3, y3, x4, and y4 in the wrong order, which results in the coordinates being in clockwise order instead. To fix this, we need to swap the calculations for x3 and x4, and y3 and y4. # Number of passing repairs: 0/25 ### PARTICIPANT FEEDBACK #1 The program calculates vertices for a clockwise-oriented square, not counter-clockwise. To fix this, (y2 - y1) and (x2 - x1) should be negated in the equations. # Number of passing repairs: 25/25 ### PARTICIPANT FEEDBACK #2 The calculation for y3 and y4 incorrectly subtract the x direction vector to y0 and y1 respectively. In each of these cases, the calculation can be fixed by changing the first subtraction to addition.
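### EDITOR'S NOTE (illustrative corrected program — not one of the repairs produced in the study) The passing repairs above amount to rotating the edge vector (x2-x1, y2-y1) by +90 degrees; a minimal sketch under that reading, with variable names mirroring the incorrect program:

# Hedged sketch by the editor, following the passing feedback above.
x1, y1, x2, y2 = map(int, input().split())
dx, dy = x2 - x1, y2 - y1          # edge vector from vertex 1 to vertex 2
x3, y3 = x2 - dy, y2 + dx          # rotate by +90 degrees to stay counter-clockwise
x4, y4 = x1 - dy, y1 + dx
print(x3, y3, x4, y4)              # input "0 0 0 1" -> "-1 1 -1 0"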
2306.09896#175
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
177
### SPECIFICATION One day, $n$ people ($n$ is an even number) met on a plaza and made two round dances, each round dance consists of exactly $\frac{n}{2}$ people. Your task is to find the number of ways $n$ people can make two round dances if each round dance consists of exactly $\frac{n}{2}$ people. Each person should belong to exactly one of these two round dances. Round dance is a dance circle consisting of $1$ or more people. Two round dances are indistinguishable (equal) if one can be transformed to another by choosing the first participant. For example, round dances $[1, 3, 4, 2]$, $[4, 2, 1, 3]$ and $[2, 1, 3, 4]$ are indistinguishable. For example, if $n=2$ then the number of ways is $1$: one round dance consists of the first person and the second one of the second person. For example, if $n=4$ then the number of ways is $3$. Possible options: one round dance - $[1,2]$, another - $[3,4]$; one round dance - $[2,4]$, another -
2306.09896#177
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
178
Possible options: one round dance - $[1,2]$, another - $[3,4]$; one round dance - $[2,4]$, another - $[3,1]$; one round dance - $[4,1]$, another - $[3,2]$. Your task is to find the number of ways $n$ people can make two round dances if each round dance consists of exactly $\frac{n}{2}$ people. -----Input----- The input contains one integer $n$ ($2 \le n \le 20$), $n$ is an even number. -----Output----- Print one integer - the number of ways to make two round dances. It is guaranteed that the answer fits in the $64$-bit integer data type. -----Examples----- Input 2 Output 1
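### EDITOR'S NOTE (worked counting argument — added for clarity, not part of the original task statement) One way to see the closed form behind the sample outputs: choose which $\frac{n}{2}$ people form the first circle, arrange each circle up to rotation, and divide by two because the two circles are unordered:

\[
\#\text{ways} \;=\; \frac{1}{2}\binom{n}{n/2}\left(\frac{n}{2}-1\right)!^{2} \;=\; \frac{n!}{2\,(n/2)^{2}},
\]

which gives $1$ for $n=2$ and $3$ for $n=4$, matching the samples.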
2306.09896#178
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
180
Input 4 Output 3 Input 8 Output 1260 Input 20 Output 12164510040883200 ### INCORRECT PROGRAM def factorial(n): if n == 0: return 1 else: return n * factorial(n-1) n = int(input()) ans = (factorial(n) // (n // 2)**2) // (2 * n) print(ans) ### ERROR MESSAGE ===Input=== 2 ===Program Output=== 0 ===Expected Output=== 1 ### GPT-4 FEEDBACK #1 The issue with the code is that it uses integer division when calculating the final answer, which may cause the result to be rounded down incorrectly. To fix this, we can use float division and then convert the result to an integer. # Number of passing repairs: 0/25 ### GPT-4 FEEDBACK #2 The following is a concise explanation of the issue: The code attempts to calculate the number of ways using integer division, which may result in a loss of precision and incorrect results. To fix the issue, change integer division to regular division, then round the result to the nearest integer using `round()`. # Number of passing repairs: 0/25 ### PARTICIPANT FEEDBACK #1 The problem is that the formula used is incorrect. For n <= 4, n!
2306.09896#180
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
181
Number of passing repairs: 0/25 ### PARTICIPANT FEEDBACK #1 The problem is that the formula used is incorrect. For n <= 4, n! is less than n^3 / 2 and so this is why we see a 0. There is an erroneous additional division by n which is the true reason for the wrong answer. # Number of passing repairs: 25/25 ### PARTICIPANT FEEDBACK #2 On line 11, the program calculates a term with a correct numerator, but an incorrect denominator. It divides by (2*n), when it should just divide by 2. # Number of passing repairs: 25/25
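### EDITOR'S NOTE (illustrative corrected program — not one of the repairs produced in the study) Both passing participant repairs above reduce to dividing by 2 instead of 2*n; a minimal sketch under that assumption:

# Hedged sketch by the editor, following the participant feedback above.
import math

n = int(input())
ans = math.factorial(n) // (n // 2) ** 2 // 2   # n! / ((n/2)^2 * 2)
print(ans)                                      # n = 8 -> 1260, n = 20 -> 12164510040883200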
2306.09896#181
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
182
# Listing 13: Task 4855, Program A. ### SPECIFICATION The impact factor of a scientific journal is a measure reflecting the average number of citations to articles published in science journals. For this task we are using a simplified formula for calculating the impact factor:\[ \frac{\text{Total count of citations received by articles published in the journal}}{\text{Total number of articles published}} \] Rounding is always performed upwards. For example the impact factor of the "Journal for ore research and time wasting" that published $38$ articles cited $894$ times is $894 / 38 = 23.53$ rounding up to $24$. You are the editor of one scientific journal. You know how many articles you are going to publish and the owners are pushing you to reach a specific impact factor. You are wondering how many scientists you will have to bribe to cite your articles to meet the owners demands. Since money is tight you want to bribe the minimal amount of scientists. Each bribed scientist buys you a single citation. -----Input----- First and only line of input will contain $2$ integers, $A$ ($1 \leq A \leq 100$), the number of articles you plan to publish and $I$ ($1 \leq I \leq 100$), the impact factor the owners require. -----Output----- The first and only line of output should contain one integer, the minimal number of scientists you need to
2306.09896#182
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
184
38 24 Sample Output 1: 875 Sample Input 2: 1 100 Sample Output 2: 100 ### INCORRECT PROGRAM import math A, I = map(int, input().split()) needed_citations = math.ceil(A * (I - 0.5)) scientists_to_bribe = needed_citations - A * (I - 1) print(scientists_to_bribe) ### ERROR MESSAGE ===Input=== 38 24 ===Program Output=== 19 ===Expected Output=== 875 ### GPT-4 FEEDBACK #1 The following is a concise explanation of the issue: The calculation for the number of scientists to bribe is incorrect. It should be calculated by first finding the minimum number of citations needed: `A * I`, and then subtracting the current number of citations: `A * (I - 1)`. # Number of passing repairs: 0/25 ### GPT-4 FEEDBACK #2 The following is a concise explanation of the issue: The code calculates the required number of scientists to bribe by subtracting the current number of citations from the number of needed citations. However, it incorrectly calculates the current number of citations. # Number of passing repairs: 1/25 ### PARTICIPANT FEEDBACK #1 The
2306.09896#184
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09896
186
### PARTICIPANT FEEDBACK #2 The program seems to have a conceptual misunderstanding. Specifically, the number of scientists needed to bribe is equal to the number of required citations (`citations_needed`). However, the calculation of `citation_needed` is also incorrect -- it implies rounding impact factor conventionally (e.g. 3.49 goes to 3, 3.51 goes to 4), whereas rounding here is a ceiling function on the impact factor. As such, the actual formula for `citation_needed` should be `A*(I-1) + 1` # Number of passing repairs: 25/25
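### EDITOR'S NOTE (illustrative corrected program — not one of the repairs produced in the study) Following participant feedback #2 above, every citation must be bribed, and the smallest citation count whose ceiling-rounded impact factor reaches I is A*(I-1) + 1; a minimal sketch under that reading:

# Hedged sketch by the editor, based on the participant's formula above.
A, I = map(int, input().split())
print(A * (I - 1) + 1)   # samples: A=38, I=24 -> 875; A=1, I=100 -> 100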
2306.09896#186
Is Self-Repair a Silver Bullet for Code Generation?
Large language models have shown remarkable aptitude in code generation, but still struggle on challenging tasks. Self-repair -- in which the model debugs and fixes mistakes in its own code -- has recently become a popular way to boost performance in these settings. However, only very limited studies on how and when self-repair works effectively exist in the literature, and one might wonder to what extent a model is really capable of repairing mistakes in code which was originally generated by that very same model. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval or APPS, finding that when the cost of carrying out repair is taken into account, gains are often modest, vary significantly between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; boosting the feedback with stronger models, we observe performance gains even in settings where the model does not benefit from self-repair. Finally, we find that providing the model with feedback from human participants greatly benefits repair even for GPT-4, and carry out a brief qualitative analysis of the differences observed.
http://arxiv.org/pdf/2306.09896
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
cs.CL, cs.AI, cs.PL, cs.SE
Added experiments for HumanEval (dataset) and Code Llama (model)
null
cs.CL
20230616
20231017
[ { "id": "2211.16490" }, { "id": "2302.13971" }, { "id": "2308.12950" }, { "id": "2305.04087" }, { "id": "2204.02311" }, { "id": "2107.03374" }, { "id": "2305.06161" }, { "id": "2308.03188" }, { "id": "2108.07732" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2302.12813" }, { "id": "2303.08774" } ]
2306.09093
0
arXiv:2306.09093v1 [cs.CL] 15 Jun 2023 Preprint (work in progress) MACAW-LLM: MULTI-MODAL LANGUAGE MODELING WITH IMAGE, AUDIO, VIDEO, AND TEXT INTEGRATION Chenyang Lyu1,2, Minghao Wu3, Longyue Wang1∗, Bingshuai Liu1, Zefeng Du1, Shuming Shi1 & Zhaopeng Tu1 1Tencent AI Lab [email protected], [email protected], {timxthuang,bsliu,zefengdu,shumingshi,zptu}@tencent.com # ABSTRACT
2306.09093#0
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset in terms of multi-turn dialogue, including 69K image instances and 50K video instances. We have made our data, code and model publicly available, which we hope can pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios.
http://arxiv.org/pdf/2306.09093
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI, cs.CV
Longyue Wang is the corresponding author. Our project page is at https://github.com/lyuchenyang/Macaw-LLM
null
cs.CL
20230615
20230615
[]
2306.09093
1
# ABSTRACT Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied. In this work, we propose MACAW-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. MACAW-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset in terms of multi-turn dialogue, including 69K image instances and 50K video instances. We have made our data, code and model publicly available, which we hope can pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios. https://github.com/lyuchenyang/Macaw-LLM 1 INTRODUCTION
2306.09093#1
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset in terms of multi-turn dialogue, including 69K image instances and 50K video instances. We have made our data, code and model publicly available, which we hope can pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios.
http://arxiv.org/pdf/2306.09093
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI, cs.CV
Longyue Wang is the corresponding author. Our project page is at https://github.com/lyuchenyang/Macaw-LLM
null
cs.CL
20230615
20230615
[]
2306.09212
1
# ABSTRACT As the capabilities of large language models (LLMs) continue to advance, evaluating their performance is becoming simultaneously more important and more challenging. This paper aims to address this issue for Mandarin Chinese in the form of CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural sciences, social sciences, engineering, and the humanities. We conduct a thorough evaluation of more than 20 contemporary multilingual and Chinese LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an accuracy of even 60%, which is the pass mark for Chinese exams. This highlights that there is significant room for improvement in the capabilities of LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models’ performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models in the Chinese context. 1 # INTRODUCTION
2306.09212#1
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09299
1
A hallmark property of explainable AI models is the ability to teach other agents, communicating knowledge of how to perform a task. While Large Language Models (LLMs) perform complex reasoning by generating explanations for their predictions, it is unclear whether they also make good teachers for weaker agents. To address this, we consider a student-teacher framework between two LLM agents and study if, when, and how the teacher should intervene with natural language explanations to improve the student’s performance. Since communication is expensive, we define a budget such that the teacher only communicates explanations for a fraction of the data, after which the student should perform well on its own. We decompose the teaching problem along four axes: (1) if teacher’s test time intervention improve student predictions, (2) when it is worth explaining a data point, (3) how the teacher should personalize explanations to better teach the student, and (4) if teacher explanations also improve student performance on future unexplained data. We first show that teacher LLMs can indeed intervene on student reasoning to improve their performance. Next, inspired by the Theory of Mind abilities of effective teachers, we propose building two few-shot mental
2306.09299#1
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
A hallmark property of explainable AI models is the ability to teach other agents, communicating knowledge of how to perform a task. While Large Language Models perform complex reasoning by generating explanations for their predictions, it is unclear whether they also make good teachers for weaker agents. To address this, we consider a student-teacher framework between two LLM agents and study if, when, and how the teacher should intervene with natural language explanations to improve the student's performance. Since communication is expensive, we define a budget such that the teacher only communicates explanations for a fraction of the data, after which the student should perform well on its own. We decompose the teaching problem along four axes: (1) if teacher's test time intervention improve student predictions, (2) when it is worth explaining a data point, (3) how the teacher should personalize explanations to better teach the student, and (4) if teacher explanations also improve students on future unexplained data. We first show that teacher LLMs can indeed intervene on student reasoning to improve their performance. Next, inspired by the Theory of Mind abilities of effective teachers, we propose building two few-shot mental models of the student. The first model defines an Intervention Function that simulates the utility of an intervention, allowing the teacher to intervene when this utility is the highest and improving student performance at lower budgets. The second model enables the teacher to personalize explanations for a particular student and outperform unpersonalized teachers. We also demonstrate that in multi-turn interactions, teacher explanations generalize and learning from explained data improves student performance on future unexplained data. Finally, we verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.
http://arxiv.org/pdf/2306.09299
Swarnadeep Saha, Peter Hase, Mohit Bansal
cs.CL, cs.AI, cs.LG
NeurIPS 2023 (23 pages, 12 figures). Our code is available at https://github.com/swarnaHub/ExplanationIntervention
null
cs.CL
20230615
20231114
[ { "id": "2302.13971" }, { "id": "2007.12248" }, { "id": "2204.02311" }, { "id": "2302.08399" }, { "id": "2304.05489" }, { "id": "2304.11490" }, { "id": "2210.11416" }, { "id": "2110.14168" }, { "id": "2212.10071" }, { "id": "1702.08608" }, { "id": "2302.02083" }, { "id": "2301.12726" }, { "id": "2112.04359" }, { "id": "1503.02531" }, { "id": "2010.04119" }, { "id": "2303.12712" }, { "id": "2212.08410" }, { "id": "2303.17651" }, { "id": "2212.09721" }, { "id": "2305.11426" }, { "id": "2305.14763" } ]
2306.09328
1
[Figure residue from the WizMap interface: the Search Panel shows results for the query "dialogue", listing paper titles and abstract snippets (e.g., dialogue state tracking, MultiWOZ 2.2 annotation corrections and state tracking baselines, dialogue act annotation, personalized extractive summarization for dialogue scenarios, dialogue management for restricted-domain question answering, amendable generation for dialogue state tracking, template creation for ranking-based dialogue models), alongside the Control Panel toggles (contour, point, grid, label).]
2306.09328#1
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
Machine learning models often learn latent embedding representations that capture the domain semantics of their training data. These embedding representations are valuable for interpreting trained models, building new models, and analyzing new datasets. However, interpreting and using embeddings can be challenging due to their opaqueness, high dimensionality, and the large size of modern datasets. To tackle these challenges, we present WizMap, an interactive visualization tool to help researchers and practitioners easily explore large embeddings. With a novel multi-resolution embedding summarization method and a familiar map-like interaction design, WizMap enables users to navigate and interpret embedding spaces with ease. Leveraging modern web technologies such as WebGL and Web Workers, WizMap scales to millions of embedding points directly in users' web browsers and computational notebooks without the need for dedicated backend servers. WizMap is open-source and available at the following public demo link: https://poloclub.github.io/wizmap.
http://arxiv.org/pdf/2306.09328
Zijie J. Wang, Fred Hohman, Duen Horng Chau
cs.LG, cs.CL, cs.CV, cs.HC
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see https://youtu.be/8fJG87QVceQ. For a live demo, see https://poloclub.github.io/wizmap. Code is available at https://github.com/poloclub/wizmap
null
cs.LG
20230615
20230615
[ { "id": "1810.04805" }, { "id": "2210.14896" }, { "id": "2205.03963" }, { "id": "2203.05794" }, { "id": "1808.09074" }, { "id": "1802.03426" } ]
2306.09442
1
Deploying large language models (LMs) can pose hazards from harmful outputs such as toxic or false text. Prior work has introduced automated tools that elicit harmful outputs to identify these risks. While this is a valuable step toward securing models, these approaches rely on a pre-existing way to efficiently classify undesirable outputs. Using a pre-existing classifier does not allow for red-teaming to be tailored to the target model. Furthermore, when failures can be easily classified in advance, red-teaming has limited marginal value because problems can be avoided by simply filtering training data and/or model outputs. Here, we consider red-teaming “from scratch” in which the adversary does not begin with a way to classify failures. Our framework consists of three steps: 1) Exploring the model’s range of behaviors in the desired context; 2) Establishing a definition and measurement for undesired behavior (e.g., a classifier trained to reflect human evaluations); and 3) Exploiting the model’s flaws using this measure to develop diverse adversarial prompts. We use this approach to red-team GPT-3 to
2306.09442#1
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
Deploying large language models (LMs) can pose hazards from harmful outputs such as toxic or false text. Prior work has introduced automated tools that elicit harmful outputs to identify these risks. While this is a valuable step toward securing models, these approaches rely on a pre-existing way to efficiently classify undesirable outputs. Using a pre-existing classifier does not allow for red-teaming to be tailored to the target model. Furthermore, when failures can be easily classified in advance, red-teaming has limited marginal value because problems can be avoided by simply filtering training data and/or model outputs. Here, we consider red-teaming "from scratch," in which the adversary does not begin with a way to classify failures. Our framework consists of three steps: 1) Exploring the model's range of behaviors in the desired context; 2) Establishing a definition and measurement for undesired behavior (e.g., a classifier trained to reflect human evaluations); and 3) Exploiting the model's flaws using this measure to develop diverse adversarial prompts. We use this approach to red-team GPT-3 to discover classes of inputs that elicit false statements. In doing so, we construct the CommonClaim dataset of 20,000 statements labeled by humans as common-knowledge-true, common knowledge-false, or neither. We are making code and data available.
http://arxiv.org/pdf/2306.09442
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230615
20231011
[ { "id": "2205.12548" }, { "id": "2202.03286" }, { "id": "1712.06751" }, { "id": "2206.10812" }, { "id": "2308.04265" }, { "id": "1803.05355" }, { "id": "2307.00175" }, { "id": "2203.07281" }, { "id": "1909.03242" }, { "id": "2307.02483" }, { "id": "2302.03668" }, { "id": "2203.11147" }, { "id": "2010.15980" }, { "id": "2302.06503" }, { "id": "2304.05197" }, { "id": "2103.06332" }, { "id": "2005.00174" }, { "id": "2104.13733" }, { "id": "2209.07858" }, { "id": "2205.14334" }, { "id": "1908.07125" }, { "id": "2212.08073" }, { "id": "2101.07691" }, { "id": "2307.15043" }, { "id": "2303.17548" }, { "id": "2109.01653" }, { "id": "2302.09664" }, { "id": "2212.03827" }, { "id": "2104.07567" }, { "id": "1812.05271" }, { "id": "1804.07461" }, { "id": "2104.08678" }, { "id": "2206.13316" }, { "id": "2302.08582" }, { "id": "2307.15217" }, { "id": "2303.04381" }, { "id": "1907.11692" }, { "id": "2212.09251" }, { "id": "2303.15056" }, { "id": "2212.10539" }, { "id": "2110.06674" }, { "id": "2009.02252" }, { "id": "2109.07958" }, { "id": "2005.00661" } ]
2306.09539
1
# Abstract State space models (SSMs) have shown impressive results on tasks that require modeling long-range dependencies and efficiently scale to long sequences owing to their subquadratic runtime complexity. Originally designed for continuous signals, SSMs have shown superior performance on a plethora of tasks, in vision and audio; however, SSMs still lag Transformer performance in Language Modeling tasks. In this work, we propose a hybrid layer named Block-State Transformer (BST), that internally combines an SSM sublayer for long-range contextualization, and a Block Transformer sublayer for short-term representation of sequences. We study three different, and completely parallelizable, variants that integrate SSMs and block-wise attention. We show that our model outperforms similar Transformer-based architectures on language modeling perplexity and generalizes to longer sequences. In addition, the Block-State Transformer demonstrates more than tenfold increase in speed at the layer level compared to the Block-Recurrent Transformer when model parallelization is employed. # Introduction
2306.09539#1
Block-State Transformers
State space models (SSMs) have shown impressive results on tasks that require modeling long-range dependencies and efficiently scale to long sequences owing to their subquadratic runtime complexity. Originally designed for continuous signals, SSMs have shown superior performance on a plethora of tasks, in vision and audio; however, SSMs still lag Transformer performance in Language Modeling tasks. In this work, we propose a hybrid layer named Block-State Transformer (BST), that internally combines an SSM sublayer for long-range contextualization, and a Block Transformer sublayer for short-term representation of sequences. We study three different, and completely parallelizable, variants that integrate SSMs and block-wise attention. We show that our model outperforms similar Transformer-based architectures on language modeling perplexity and generalizes to longer sequences. In addition, the Block-State Transformer demonstrates more than tenfold increase in speed at the layer level compared to the Block-Recurrent Transformer when model parallelization is employed.
http://arxiv.org/pdf/2306.09539
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
cs.CL, cs.LG
NeurIPS'23 - Thirty-seventh Conference on Neural Information Processing Systems
null
cs.CL
20230615
20231030
[ { "id": "1901.02860" } ]
2306.09093
2
https://github.com/lyuchenyang/Macaw-LLM 1 INTRODUCTION Instruction-tuned large language models (LLMs) have demonstrated impressive capabilities across various domains, exhibiting zero-shot generalization without the need for task-specific fine-tuning (Ouyang et al., 2022; Wei et al., 2022; Sanh et al., 2022; Chung et al., 2022; OpenAI, 2023). However, these models are primarily limited to processing text-based data. Previous research on multi-modal pre-training has shown promise in aligning knowledge from different modalities within a shared latent space (Wang et al., 2022a; Alayrac et al., 2022; Bao et al., 2022; Wang et al., 2022b). Furthermore, there is a recent line of research papers focusing on enabling multi-modal pre-trained models to understand and follow instructions (Xu et al., 2022; Zhu et al., 2023; Liu et al., 2023; Li et al., 2023a; Gong et al., 2023; Dai et al., 2023; Su et al., 2023; Huang et al., 2023).
2306.09093#2
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset in terms of multi-turn dialogue, including 69K image instances and 50K video instances. We have made our data, code and model publicly available, which we hope can pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios.
http://arxiv.org/pdf/2306.09093
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI, cs.CV
Longyue Wang is the corresponding author. Our project page is at https://github.com/lyuchenyang/Macaw-LLM
null
cs.CL
20230615
20230615
[]
2306.09212
2
# INTRODUCTION Large language models (LLMs) have driven remarkable advancements in natural language processing and artificial intelligence, revolutionizing the field (Zhang et al., 2022; Scao et al., 2022; Zeng et al., 2023; Touvron et al., 2023a; OpenAI, 2023; Wu et al., 2023; Taori et al., 2023; Li et al., 2023a). However, assessing the knowledge and reasoning abilities of these models has become increasingly challenging, especially with the proliferation of LLMs that generate fluent and plausible responses.
2306.09212#2
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09299
2
intervene on student reasoning to improve their performance. Next, inspired by the Theory of Mind abilities of effective teachers, we propose building two few-shot mental models of the student. The first model defines an Intervention Function that simulates the utility of an intervention, allowing the teacher to intervene when this utility is the highest and improving student performance at lower budgets. The second model enables the teacher to personalize explanations for a particular student and outperform unpersonalized teachers. We also demonstrate that in multi-turn interactions, teacher explanations generalize and learning from explained data improves student performance on future unexplained data. Finally, we also verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.1
2306.09299#2
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
A hallmark property of explainable AI models is the ability to teach other agents, communicating knowledge of how to perform a task. While Large Language Models perform complex reasoning by generating explanations for their predictions, it is unclear whether they also make good teachers for weaker agents. To address this, we consider a student-teacher framework between two LLM agents and study if, when, and how the teacher should intervene with natural language explanations to improve the student's performance. Since communication is expensive, we define a budget such that the teacher only communicates explanations for a fraction of the data, after which the student should perform well on its own. We decompose the teaching problem along four axes: (1) if teacher's test time intervention improve student predictions, (2) when it is worth explaining a data point, (3) how the teacher should personalize explanations to better teach the student, and (4) if teacher explanations also improve students on future unexplained data. We first show that teacher LLMs can indeed intervene on student reasoning to improve their performance. Next, inspired by the Theory of Mind abilities of effective teachers, we propose building two few-shot mental models of the student. The first model defines an Intervention Function that simulates the utility of an intervention, allowing the teacher to intervene when this utility is the highest and improving student performance at lower budgets. The second model enables the teacher to personalize explanations for a particular student and outperform unpersonalized teachers. We also demonstrate that in multi-turn interactions, teacher explanations generalize and learning from explained data improves student performance on future unexplained data. Finally, we verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.
http://arxiv.org/pdf/2306.09299
Swarnadeep Saha, Peter Hase, Mohit Bansal
cs.CL, cs.AI, cs.LG
NeurIPS 2023 (23 pages, 12 figures). Our code is available at https://github.com/swarnaHub/ExplanationIntervention
null
cs.CL
20230615
20231114
[ { "id": "2302.13971" }, { "id": "2007.12248" }, { "id": "2204.02311" }, { "id": "2302.08399" }, { "id": "2304.05489" }, { "id": "2304.11490" }, { "id": "2210.11416" }, { "id": "2110.14168" }, { "id": "2212.10071" }, { "id": "1702.08608" }, { "id": "2302.02083" }, { "id": "2301.12726" }, { "id": "2112.04359" }, { "id": "1503.02531" }, { "id": "2010.04119" }, { "id": "2303.12712" }, { "id": "2212.08410" }, { "id": "2303.17651" }, { "id": "2212.09721" }, { "id": "2305.11426" }, { "id": "2305.14763" } ]
2306.09328
2
[Figure residue from the WizMap interface: the Control Panel (contour, point, grid, and label toggles), the Map View with a time slider, topic-cluster labels such as speech recognition, dialogue modeling, grammar and dependency parsing, discourse parsing, text generation, question answering, hate-speech detection, named entity recognition, summarization, relation extraction, metaphor, statistical and neural machine translation, word sense disambiguation, and clinical/medical text, and the footer "WizMap | Paper | Code | Video | ACL Abstracts | 63,213 Data Points".]
2306.09328#2
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
Machine learning models often learn latent embedding representations that capture the domain semantics of their training data. These embedding representations are valuable for interpreting trained models, building new models, and analyzing new datasets. However, interpreting and using embeddings can be challenging due to their opaqueness, high dimensionality, and the large size of modern datasets. To tackle these challenges, we present WizMap, an interactive visualization tool to help researchers and practitioners easily explore large embeddings. With a novel multi-resolution embedding summarization method and a familiar map-like interaction design, WizMap enables users to navigate and interpret embedding spaces with ease. Leveraging modern web technologies such as WebGL and Web Workers, WizMap scales to millions of embedding points directly in users' web browsers and computational notebooks without the need for dedicated backend servers. WizMap is open-source and available at the following public demo link: https://poloclub.github.io/wizmap.
http://arxiv.org/pdf/2306.09328
Zijie J. Wang, Fred Hohman, Duen Horng Chau
cs.LG, cs.CL, cs.CV, cs.HC
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see https://youtu.be/8fJG87QVceQ. For a live demo, see https://poloclub.github.io/wizmap. Code is available at https://github.com/poloclub/wizmap
null
cs.LG
20230615
20230615
[ { "id": "1810.04805" }, { "id": "2210.14896" }, { "id": "2205.03963" }, { "id": "2203.05794" }, { "id": "1808.09074" }, { "id": "1802.03426" } ]
2306.09442
2
Exploiting the model’s flaws using this measure to develop diverse adversarial prompts. We use this approach to red-team GPT-3 to discover classes of inputs that elicit false statements. In doing so, we construct the CommonClaim dataset of 20,000 statements labeled by humans as common-knowledge-true, common-knowledge-false, or neither. Code is available at this https url. CommonClaim is available at this https url.
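As a rough, hypothetical illustration of the three-step framework this chunk belongs to (explore the model's behavior, establish a measure of undesired output from human labels, then exploit that measure to find adversarial prompts), the sketch below uses stand-in components: target_model, the seed prompts, and the toy label-based measure are placeholders, not the paper's actual classifier or search procedure.

# Hypothetical sketch of an Explore / Establish / Exploit loop (placeholder components).
import random

def target_model(prompt):
    # Stand-in for querying the language model under test.
    return prompt + " ... completion"

def explore(seeds, num_samples):
    # Step 1: sample the model's range of behaviors in the desired context.
    return [target_model(random.choice(seeds)) for _ in range(num_samples)]

def establish(outputs, human_labels):
    # Step 2: turn human judgments into a reusable measure of undesired output.
    # (A toy lookup here; the paper trains a classifier on human labels.)
    flagged = {o for o, label in zip(outputs, human_labels) if label == "false"}
    return lambda text: 1.0 if text in flagged else 0.0

def exploit(measure, candidate_prompts, top_k=3):
    # Step 3: search for prompts whose completions score high under the measure.
    ranked = sorted(candidate_prompts, key=lambda p: measure(target_model(p)), reverse=True)
    return ranked[:top_k]

seeds = ["The moon is", "Vaccines are", "The capital of France is"]
outputs = explore(seeds, num_samples=12)
labels = ["false" if i % 3 == 0 else "true" for i in range(len(outputs))]
measure = establish(outputs, labels)
print(exploit(measure, seeds))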
2306.09442#2
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
Deploying large language models (LMs) can pose hazards from harmful outputs such as toxic or false text. Prior work has introduced automated tools that elicit harmful outputs to identify these risks. While this is a valuable step toward securing models, these approaches rely on a pre-existing way to efficiently classify undesirable outputs. Using a pre-existing classifier does not allow for red-teaming to be tailored to the target model. Furthermore, when failures can be easily classified in advance, red-teaming has limited marginal value because problems can be avoided by simply filtering training data and/or model outputs. Here, we consider red-teaming "from scratch," in which the adversary does not begin with a way to classify failures. Our framework consists of three steps: 1) Exploring the model's range of behaviors in the desired context; 2) Establishing a definition and measurement for undesired behavior (e.g., a classifier trained to reflect human evaluations); and 3) Exploiting the model's flaws using this measure to develop diverse adversarial prompts. We use this approach to red-team GPT-3 to discover classes of inputs that elicit false statements. In doing so, we construct the CommonClaim dataset of 20,000 statements labeled by humans as common-knowledge-true, common knowledge-false, or neither. We are making code and data available.
http://arxiv.org/pdf/2306.09442
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230615
20231011
[ { "id": "2205.12548" }, { "id": "2202.03286" }, { "id": "1712.06751" }, { "id": "2206.10812" }, { "id": "2308.04265" }, { "id": "1803.05355" }, { "id": "2307.00175" }, { "id": "2203.07281" }, { "id": "1909.03242" }, { "id": "2307.02483" }, { "id": "2302.03668" }, { "id": "2203.11147" }, { "id": "2010.15980" }, { "id": "2302.06503" }, { "id": "2304.05197" }, { "id": "2103.06332" }, { "id": "2005.00174" }, { "id": "2104.13733" }, { "id": "2209.07858" }, { "id": "2205.14334" }, { "id": "1908.07125" }, { "id": "2212.08073" }, { "id": "2101.07691" }, { "id": "2307.15043" }, { "id": "2303.17548" }, { "id": "2109.01653" }, { "id": "2302.09664" }, { "id": "2212.03827" }, { "id": "2104.07567" }, { "id": "1812.05271" }, { "id": "1804.07461" }, { "id": "2104.08678" }, { "id": "2206.13316" }, { "id": "2302.08582" }, { "id": "2307.15217" }, { "id": "2303.04381" }, { "id": "1907.11692" }, { "id": "2212.09251" }, { "id": "2303.15056" }, { "id": "2212.10539" }, { "id": "2110.06674" }, { "id": "2009.02252" }, { "id": "2109.07958" }, { "id": "2005.00661" } ]
2306.09539
2
# Introduction Transformers have shown impressive performance on a wide range of natural language processing (NLP) tasks. While they have been primarily used for language modeling, the Transformer architecture [40] has also been successfully applied to tasks outside of NLP and has mostly replaced Recurrent Neural Networks (RNNs). Several factors contribute to this success, including computational efficiency and architectural inductive biases that are well-suited for training on natural language tasks at scale. On the computational upside, Transformers are able to process tokens of a given input sequence in parallel, making the most of modern accelerator hardware. Moreover, the attention mechanism enables Transformers to find relationships in longer sequences by providing ready access to all the extracted information from past tokens when inferring the next token. Compared to RNNs and LSTMs [19], the benefits of self-attention are two-fold: (i) the capacity of what can be stored and directly accessed as context is drastically increased, and (ii) training on longer sequences is more stable [18, 23].
2306.09539#2
Block-State Transformers
State space models (SSMs) have shown impressive results on tasks that require modeling long-range dependencies and efficiently scale to long sequences owing to their subquadratic runtime complexity. Originally designed for continuous signals, SSMs have shown superior performance on a plethora of tasks, in vision and audio; however, SSMs still lag Transformer performance in Language Modeling tasks. In this work, we propose a hybrid layer named Block-State Transformer (BST), that internally combines an SSM sublayer for long-range contextualization, and a Block Transformer sublayer for short-term representation of sequences. We study three different, and completely parallelizable, variants that integrate SSMs and block-wise attention. We show that our model outperforms similar Transformer-based architectures on language modeling perplexity and generalizes to longer sequences. In addition, the Block-State Transformer demonstrates more than tenfold increase in speed at the layer level compared to the Block-Recurrent Transformer when model parallelization is employed.
http://arxiv.org/pdf/2306.09539
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
cs.CL, cs.LG
NeurIPS'23 - Thirty-seventh Conference on Neural Information Processing Systems
null
cs.CL
20230615
20231030
[ { "id": "1901.02860" } ]
2306.09093
3
In this work, we propose MACAW-LLM, a multi-modal instruction-tuned LLM that integrates four different modalities, including image, video, audio, and text, into one single model. We propose a novel alignment approach that aligns multi-modal features to the embeddings of LLMs, which produces aligned features that are closer to the textual features of language models and can be naturally injected into the input sequence of LLMs. A key motivation for our approach is to streamline the adaptation process for LLMs. In particular, MACAW-LLM employs a one-stage
2306.09093#3
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset in terms of multi-turn dialogue, including 69K image instances and 50K video instances. We have made our data, code and model publicly available, which we hope can pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios.
http://arxiv.org/pdf/2306.09093
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI, cs.CV
Longyue Wang is the corresponding author. Our project page is at https://github.com/lyuchenyang/Macaw-LLM
null
cs.CL
20230615
20230615
[]
2306.09212
3
To this end, researchers have created various benchmarks intended to evaluate different model capabilities (Wang et al., 2019b;a; Lin et al., 2022; Zellers et al., 2019; Hendrycks et al., 2021b; Chen et al., 2021). Specifically, Hendrycks et al. (2021a) proposed MMLU, a benchmark that encompasses various tasks ranging from elementary mathematics and computer science to management and law, which can be used to comprehensively measure LLM capabilities in terms of the knowledge embedded in them. Due to its multiple-choice question format, which facilitates easy evaluation, and the breadth of subject areas it encompasses, it has become widely used as a fundamental assessment tool of the knowledge encoded by LLMs. However, this benchmark is in English, which limits its ability to assess LLMs in other languages. Although some researchers (OpenAI, 2023) have attempted to automatically translate it to evaluate LLMs in other languages, the inherent bias towards Western (and specifically US) culture in the dataset renders it unsuitable and even inappropriate for assessing LLMs across diverse cultures and languages.
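Because the multiple-choice format mentioned here is what makes MMLU-style benchmarks (including CMMLU, described below) straightforward to score, the following is a minimal, hypothetical scoring sketch: the two toy questions and the predict stub are placeholders, and a real harness would prompt an LLM (optionally few-shot or with chain-of-thought) and parse the chosen letter from its output.

# Minimal sketch of multiple-choice accuracy scoring; `predict` is a placeholder
# for prompting an LLM and extracting the letter it chose.
questions = [
    {"q": "1 + 1 = ?", "choices": {"A": "1", "B": "2", "C": "3", "D": "11"}, "answer": "B"},
    {"q": "Water freezes at ? degrees Celsius", "choices": {"A": "0", "B": "10", "C": "50", "D": "100"}, "answer": "A"},
]

def predict(question, choices):
    # Placeholder model that always answers "A"; replace with an actual LLM call.
    return "A"

correct = sum(predict(item["q"], item["choices"]) == item["answer"] for item in questions)
print(f"accuracy = {correct / len(questions):.2f} (random baseline with 4 options = 0.25)")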
2306.09212#3
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09299
3
# Introduction Teaching, or the ability to provide needed information in a way that is understood by others, is often considered an important property of Explainable AI [1]. When AI models “teach” by providing meaningful and interpretable explanations, it fosters transparency, warranted trust, and the ability for humans to make informed decisions based on AI recommendations. One way the goodness of an explanation can be judged is by its ability to communicate knowledge of how to solve a problem to other agents [2, 3]. Explanations fulfill this purpose not only by being informative but also by means of filling in specific gaps in the recipient agent’s knowledge. This is enabled by the explainer having theory of mind (ToM), understanding what the recipient does not know [2], and being able to personalize its explanations based on the recipient’s needs. Recent work has argued that LLMs like GPT-3.5 now exhibit ToM, based on their ability to answer questions about mental states of
1 Code for all experiments: https://github.com/swarnaHub/ExplanationIntervention
2306.09299#3
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
A hallmark property of explainable AI models is the ability to teach other agents, communicating knowledge of how to perform a task. While Large Language Models perform complex reasoning by generating explanations for their predictions, it is unclear whether they also make good teachers for weaker agents. To address this, we consider a student-teacher framework between two LLM agents and study if, when, and how the teacher should intervene with natural language explanations to improve the student's performance. Since communication is expensive, we define a budget such that the teacher only communicates explanations for a fraction of the data, after which the student should perform well on its own. We decompose the teaching problem along four axes: (1) if teacher's test time intervention improve student predictions, (2) when it is worth explaining a data point, (3) how the teacher should personalize explanations to better teach the student, and (4) if teacher explanations also improve students on future unexplained data. We first show that teacher LLMs can indeed intervene on student reasoning to improve their performance. Next, inspired by the Theory of Mind abilities of effective teachers, we propose building two few-shot mental models of the student. The first model defines an Intervention Function that simulates the utility of an intervention, allowing the teacher to intervene when this utility is the highest and improving student performance at lower budgets. The second model enables the teacher to personalize explanations for a particular student and outperform unpersonalized teachers. We also demonstrate that in multi-turn interactions, teacher explanations generalize and learning from explained data improves student performance on future unexplained data. Finally, we verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.
http://arxiv.org/pdf/2306.09299
Swarnadeep Saha, Peter Hase, Mohit Bansal
cs.CL, cs.AI, cs.LG
NeurIPS 2023 (23 pages, 12 figures). Our code is available at https://github.com/swarnaHub/ExplanationIntervention
null
cs.CL
20230615
20231114
[ { "id": "2302.13971" }, { "id": "2007.12248" }, { "id": "2204.02311" }, { "id": "2302.08399" }, { "id": "2304.05489" }, { "id": "2304.11490" }, { "id": "2210.11416" }, { "id": "2110.14168" }, { "id": "2212.10071" }, { "id": "1702.08608" }, { "id": "2302.02083" }, { "id": "2301.12726" }, { "id": "2112.04359" }, { "id": "1503.02531" }, { "id": "2010.04119" }, { "id": "2303.12712" }, { "id": "2212.08410" }, { "id": "2303.17651" }, { "id": "2212.09721" }, { "id": "2305.11426" }, { "id": "2305.14763" } ]
2306.09328
3
Fig. 1: WIZMAP empowers machine learning researchers and domain experts to easily explore and interpret millions of embedding vectors across different levels of granularity. Consider the task of investigating the embeddings of all 63k natural language processing paper abstracts indexed in ACL Anthology from 1980 to 2022. (A) The Map View tightly integrates a contour layer, a scatter plot, and automatically-generated multi-resolution embedding summaries to help users navigate through the large embedding space. (B) The Search Panel enables users to rapidly test their hypotheses through fast full-text embedding search. (C) The Control Panel allows users to customize embedding visualizations, compare multiple embedding groups, and observe how embeddings evolve over time. # Abstract
2306.09328#3
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
Machine learning models often learn latent embedding representations that capture the domain semantics of their training data. These embedding representations are valuable for interpreting trained models, building new models, and analyzing new datasets. However, interpreting and using embeddings can be challenging due to their opaqueness, high dimensionality, and the large size of modern datasets. To tackle these challenges, we present WizMap, an interactive visualization tool to help researchers and practitioners easily explore large embeddings. With a novel multi-resolution embedding summarization method and a familiar map-like interaction design, WizMap enables users to navigate and interpret embedding spaces with ease. Leveraging modern web technologies such as WebGL and Web Workers, WizMap scales to millions of embedding points directly in users' web browsers and computational notebooks without the need for dedicated backend servers. WizMap is open-source and available at the following public demo link: https://poloclub.github.io/wizmap.
http://arxiv.org/pdf/2306.09328
Zijie J. Wang, Fred Hohman, Duen Horng Chau
cs.LG, cs.CL, cs.CV, cs.HC
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see https://youtu.be/8fJG87QVceQ. For a live demo, see https://poloclub.github.io/wizmap. Code is available at https://github.com/poloclub/wizmap
null
cs.LG
20230615
20230615
[ { "id": "1810.04805" }, { "id": "2210.14896" }, { "id": "2205.03963" }, { "id": "2203.05794" }, { "id": "1808.09074" }, { "id": "1802.03426" } ]
2306.09442
3
# INTRODUCTION The vulnerability of large language models (LMs) to problems such as hallucination (Ji et al., 2023), harmful biases (Santurkar et al., 2023; Perez et al., 2022b), and jailbreaks (Oneal, 2023; Li et al., 2023; Liu et al., 2023; Rao et al., 2023; Wei et al., 2023) highlights a need to discover flaws before deployment. This is challenging because the space of possible prompts and outputs for LMs is massive. One way to do this practically is with automated red-teaming. Automated red-teaming tools search for inputs that elicit undesired responses. For example, Perez et al. (2022a) use reinforcement learning (RL) to curate prompts that cause a model to generate toxic responses, and Zou et al. (2023) use a combination of targeted search techniques to identify jailbreaks.
2306.09442#3
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
Deploying large language models (LMs) can pose hazards from harmful outputs such as toxic or false text. Prior work has introduced automated tools that elicit harmful outputs to identify these risks. While this is a valuable step toward securing models, these approaches rely on a pre-existing way to efficiently classify undesirable outputs. Using a pre-existing classifier does not allow for red-teaming to be tailored to the target model. Furthermore, when failures can be easily classified in advance, red-teaming has limited marginal value because problems can be avoided by simply filtering training data and/or model outputs. Here, we consider red-teaming "from scratch," in which the adversary does not begin with a way to classify failures. Our framework consists of three steps: 1) Exploring the model's range of behaviors in the desired context; 2) Establishing a definition and measurement for undesired behavior (e.g., a classifier trained to reflect human evaluations); and 3) Exploiting the model's flaws using this measure to develop diverse adversarial prompts. We use this approach to red-team GPT-3 to discover classes of inputs that elicit false statements. In doing so, we construct the CommonClaim dataset of 20,000 statements labeled by humans as common-knowledge-true, common knowledge-false, or neither. We are making code and data available.
http://arxiv.org/pdf/2306.09442
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230615
20231011
[ { "id": "2205.12548" }, { "id": "2202.03286" }, { "id": "1712.06751" }, { "id": "2206.10812" }, { "id": "2308.04265" }, { "id": "1803.05355" }, { "id": "2307.00175" }, { "id": "2203.07281" }, { "id": "1909.03242" }, { "id": "2307.02483" }, { "id": "2302.03668" }, { "id": "2203.11147" }, { "id": "2010.15980" }, { "id": "2302.06503" }, { "id": "2304.05197" }, { "id": "2103.06332" }, { "id": "2005.00174" }, { "id": "2104.13733" }, { "id": "2209.07858" }, { "id": "2205.14334" }, { "id": "1908.07125" }, { "id": "2212.08073" }, { "id": "2101.07691" }, { "id": "2307.15043" }, { "id": "2303.17548" }, { "id": "2109.01653" }, { "id": "2302.09664" }, { "id": "2212.03827" }, { "id": "2104.07567" }, { "id": "1812.05271" }, { "id": "1804.07461" }, { "id": "2104.08678" }, { "id": "2206.13316" }, { "id": "2302.08582" }, { "id": "2307.15217" }, { "id": "2303.04381" }, { "id": "1907.11692" }, { "id": "2212.09251" }, { "id": "2303.15056" }, { "id": "2212.10539" }, { "id": "2110.06674" }, { "id": "2009.02252" }, { "id": "2109.07958" }, { "id": "2005.00661" } ]
2306.09539
3
Given the remarkable achievements of Transformers in language modeling tasks, and their improved performance at scale on hard NLP tasks such as reasoning and question answering [2, 39, 6], the demand for deploying even deeper and larger networks is greater than ever before. An orthogonal scaling dimension, which could be potentially even more consequential, is the size of the input sequence. Despite the several advantages of Transformers over RNNs, it is still problematic to scale the input sequence length, again for both computational performance and quality reasons. Further, the Transformer’s runtime is quadratic with respect to the input sequence length, which makes training these models increasingly expensive. Furthermore, Transformers with attention that is local [8], sparse [4, 43, 36], low-rank approximated [41], or linearized via kernel methods [5, 22] notoriously struggle on long-input classification tasks [37]. Vanilla transformers can be unstable when trained
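To make the quadratic-runtime point concrete, the toy computation below simply materializes the attention score matrix for a few sequence lengths; it is a generic illustration of why cost grows with the square of the input length, not code from the paper.

# Toy illustration: the self-attention score matrix alone has T x T entries,
# so its size (and the work to compute it) quadruples when the sequence length doubles.
import numpy as np

d = 64
for T in (512, 1024, 2048):
    q = np.random.randn(T, d)
    k = np.random.randn(T, d)
    scores = q @ k.T                         # shape (T, T)
    print(T, scores.shape, scores.size)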
2306.09539#3
Block-State Transformers
State space models (SSMs) have shown impressive results on tasks that require modeling long-range dependencies and efficiently scale to long sequences owing to their subquadratic runtime complexity. Originally designed for continuous signals, SSMs have shown superior performance on a plethora of tasks, in vision and audio; however, SSMs still lag Transformer performance in Language Modeling tasks. In this work, we propose a hybrid layer named Block-State Transformer (BST), that internally combines an SSM sublayer for long-range contextualization, and a Block Transformer sublayer for short-term representation of sequences. We study three different, and completely parallelizable, variants that integrate SSMs and block-wise attention. We show that our model outperforms similar Transformer-based architectures on language modeling perplexity and generalizes to longer sequences. In addition, the Block-State Transformer demonstrates more than tenfold increase in speed at the layer level compared to the Block-Recurrent Transformer when model parallelization is employed.
http://arxiv.org/pdf/2306.09539
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
cs.CL, cs.LG
NeurIPS'23 - Thirty-seventh Conference on Neural Information Processing Systems
null
cs.CL
20230615
20231030
[ { "id": "1901.02860" } ]
2306.09093
4
instruction fine-tuning process, promoting a simpler learning experience. Previous multi-modal systems typically require two-stage training Li et al. (2023c); Zhu et al. (2023); Liu et al. (2023); Dai et al. (2023), where the first stage usually trains the projection layer for alignment between multi-modal features and text features, and the second stage is the general instruction fine-tuning for LLMs. In contrast, our approach aligns the multi-modal features to the embedding layer of LLMs, which produces aligned features based on LLM embeddings that can be naturally injected into the input sequence of LLMs. This makes our approach more advantageous.
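As a rough numpy sketch of the alignment idea described here, the snippet below projects modality features into the LLM's embedding space and prepends them to the text-token embeddings; the dimensions, the single linear projection, and the prepend strategy are illustrative assumptions rather than the exact Macaw-LLM alignment module.

# Rough sketch: map modality features into the LLM embedding space and inject them
# into the input sequence. Shapes and the single linear projection are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_modality, d_llm = 512, 4096                         # assumed encoder / LLM embedding sizes

image_features = rng.normal(size=(49, d_modality))    # e.g. patch features from a visual encoder
text_embeddings = rng.normal(size=(32, d_llm))        # token embeddings from the LLM embedding layer

W = rng.normal(size=(d_modality, d_llm)) * 0.02       # learned projection (random here, for the sketch)
aligned_image = image_features @ W                    # now lives in the LLM embedding space

llm_input = np.concatenate([aligned_image, text_embeddings], axis=0)
print(llm_input.shape)                                # (81, 4096): fed to the LLM as one sequence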
2306.09093#4
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset in terms of multi-turn dialogue, including 69K image instances and 50K video instances. We have made our data, code and model publicly available, which we hope can pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios.
http://arxiv.org/pdf/2306.09093
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI, cs.CV
Longyue Wang is the corresponding author. Our project page is at https://github.com/lyuchenyang/Macaw-LLM
null
cs.CL
20230615
20230615
[]
2306.09212
4
In this paper, we propose CMMLU (Figure 1), a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs in a Chinese linguistic and cultural context. CMMLU covers a wide range of subjects, comprising 67 topics from elementary to advanced professional levels. It includes subjects that require computational expertise, such as physics and mathematics, as well as disciplines within the humanities and social sciences. Many of these tasks are not easily translatable from other languages due to their specific contextual nuances
1 The data and evaluation code are available at https://github.com/haonan-li/CMMLU
2306.09212#4
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09299
4
[Figure 1 graphic (text extracted from the diagram): the teacher LLM and student LLM exchange prompts; labeled components include the Student Prompt, the Theory of Mind prompt, the Pre-Intervention and Post-Intervention Simulation Prompts feeding the Intervention Function (Expected Utility), a communication budget, and prompt templates built from fields such as {question}, {gold_expl}, {helpful_gold_expl}, {student_expl}, and {test_question}.]
2306.09299#4
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
A hallmark property of explainable AI models is the ability to teach other agents, communicating knowledge of how to perform a task. While Large Language Models perform complex reasoning by generating explanations for their predictions, it is unclear whether they also make good teachers for weaker agents. To address this, we consider a student-teacher framework between two LLM agents and study if, when, and how the teacher should intervene with natural language explanations to improve the student's performance. Since communication is expensive, we define a budget such that the teacher only communicates explanations for a fraction of the data, after which the student should perform well on its own. We decompose the teaching problem along four axes: (1) if teacher's test time intervention improve student predictions, (2) when it is worth explaining a data point, (3) how the teacher should personalize explanations to better teach the student, and (4) if teacher explanations also improve students on future unexplained data. We first show that teacher LLMs can indeed intervene on student reasoning to improve their performance. Next, inspired by the Theory of Mind abilities of effective teachers, we propose building two few-shot mental models of the student. The first model defines an Intervention Function that simulates the utility of an intervention, allowing the teacher to intervene when this utility is the highest and improving student performance at lower budgets. The second model enables the teacher to personalize explanations for a particular student and outperform unpersonalized teachers. We also demonstrate that in multi-turn interactions, teacher explanations generalize and learning from explained data improves student performance on future unexplained data. Finally, we verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.
http://arxiv.org/pdf/2306.09299
Swarnadeep Saha, Peter Hase, Mohit Bansal
cs.CL, cs.AI, cs.LG
NeurIPS 2023 (23 pages, 12 figures). Our code is available at https://github.com/swarnaHub/ExplanationIntervention
null
cs.CL
20230615
20231114
[ { "id": "2302.13971" }, { "id": "2007.12248" }, { "id": "2204.02311" }, { "id": "2302.08399" }, { "id": "2304.05489" }, { "id": "2304.11490" }, { "id": "2210.11416" }, { "id": "2110.14168" }, { "id": "2212.10071" }, { "id": "1702.08608" }, { "id": "2302.02083" }, { "id": "2301.12726" }, { "id": "2112.04359" }, { "id": "1503.02531" }, { "id": "2010.04119" }, { "id": "2303.12712" }, { "id": "2212.08410" }, { "id": "2303.17651" }, { "id": "2212.09721" }, { "id": "2305.11426" }, { "id": "2305.14763" } ]
2306.09328
4
# Abstract Machine learning models often learn latent embedding representations that capture the domain semantics of their training data. These embedding representations are valuable for interpreting trained models, building new models, and analyzing new datasets. However, interpreting and using embeddings can be challenging due to their opaqueness, high dimensionality, and the large size of modern datasets. To tackle these challenges, we present WIZMAP, an interactive visualization tool to help researchers and practitioners easily explore large embeddings. With a novel multi-resolution embedding summarization method and a familiar map-like interaction design, WIZMAP enables users to navigate and interpret embedding spaces with ease. Leveraging modern web technologies such as WebGL and Web Workers, WIZMAP scales to millions of embedding points directly in users’ web browsers and computational notebooks without the need for dedicated backend servers. WIZMAP is open-source and available at the following public demo link: https://poloclub.github.io/wizmap. # Introduction
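A minimal sketch of the kind of multi-resolution summarization the abstract refers to: bin the 2D-projected embeddings into coarser or finer grids and label each cell with its most frequent terms. The grid binning and word counting here are assumptions made for illustration, not WizMap's exact summarization method.

# Illustrative sketch: summarize a 2D embedding map at several zoom levels by
# binning points into a grid and surfacing each cell's most common words.
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(1000, 2))                   # 2D-projected embeddings
docs = [f"paper about topic{t} models" for t in rng.integers(0, 5, size=1000)]

def summarize(points, docs, grid_size):
    cells = {}
    for (x, y), doc in zip(points, docs):
        key = (int(x * grid_size), int(y * grid_size))       # grid cell containing the point
        cells.setdefault(key, Counter()).update(doc.split())
    # label each cell with its two most frequent words
    return {key: [w for w, _ in counts.most_common(2)] for key, counts in cells.items()}

coarse = summarize(points, docs, grid_size=2)   # low zoom: a few broad summaries
fine = summarize(points, docs, grid_size=8)     # high zoom: many local summaries
print(len(coarse), len(fine))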
2306.09328#4
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
Machine learning models often learn latent embedding representations that capture the domain semantics of their training data. These embedding representations are valuable for interpreting trained models, building new models, and analyzing new datasets. However, interpreting and using embeddings can be challenging due to their opaqueness, high dimensionality, and the large size of modern datasets. To tackle these challenges, we present WizMap, an interactive visualization tool to help researchers and practitioners easily explore large embeddings. With a novel multi-resolution embedding summarization method and a familiar map-like interaction design, WizMap enables users to navigate and interpret embedding spaces with ease. Leveraging modern web technologies such as WebGL and Web Workers, WizMap scales to millions of embedding points directly in users' web browsers and computational notebooks without the need for dedicated backend servers. WizMap is open-source and available at the following public demo link: https://poloclub.github.io/wizmap.
http://arxiv.org/pdf/2306.09328
Zijie J. Wang, Fred Hohman, Duen Horng Chau
cs.LG, cs.CL, cs.CV, cs.HC
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see https://youtu.be/8fJG87QVceQ. For a live demo, see https://poloclub.github.io/wizmap. Code is available at https://github.com/poloclub/wizmap
null
cs.LG
20230615
20230615
[ { "id": "1810.04805" }, { "id": "2210.14896" }, { "id": "2205.03963" }, { "id": "2203.05794" }, { "id": "1808.09074" }, { "id": "1802.03426" } ]
2306.09442
4
These approaches are valuable, but they require that the harmful behavior can be identified efficiently beforehand. For instance, Perez et al. (2022b) depend on a pre-existing toxicity classifier, and Zou et al. (2023) use specific, user-provided phrases as target outputs. This is unrealistic for many applications. Often, the red team must work from a more abstract specification and tailor their work to a specific model. Most importantly, if failures can already be efficiently identified in advance, then red-teaming has limited value because bad text could simply be filtered from the model’s training data and/or outputs. In Section 4, we review red-teaming research and find that it rarely confronts the challenge of classifying harmful output or accounts for simple filtering baselines. In this work, we introduce an automatic red-teaming framework that does not assume that the red team starts with an efficient way to identify failures. Instead, they must work from an abstract
2306.09442#4
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
Deploying large language models (LMs) can pose hazards from harmful outputs such as toxic or false text. Prior work has introduced automated tools that elicit harmful outputs to identify these risks. While this is a valuable step toward securing models, these approaches rely on a pre-existing way to efficiently classify undesirable outputs. Using a pre-existing classifier does not allow for red-teaming to be tailored to the target model. Furthermore, when failures can be easily classified in advance, red-teaming has limited marginal value because problems can be avoided by simply filtering training data and/or model outputs. Here, we consider red-teaming "from scratch," in which the adversary does not begin with a way to classify failures. Our framework consists of three steps: 1) Exploring the model's range of behaviors in the desired context; 2) Establishing a definition and measurement for undesired behavior (e.g., a classifier trained to reflect human evaluations); and 3) Exploiting the model's flaws using this measure to develop diverse adversarial prompts. We use this approach to red-team GPT-3 to discover classes of inputs that elicit false statements. In doing so, we construct the CommonClaim dataset of 20,000 statements labeled by humans as common-knowledge-true, common knowledge-false, or neither. We are making code and data available.
http://arxiv.org/pdf/2306.09442
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230615
20231011
[ { "id": "2205.12548" }, { "id": "2202.03286" }, { "id": "1712.06751" }, { "id": "2206.10812" }, { "id": "2308.04265" }, { "id": "1803.05355" }, { "id": "2307.00175" }, { "id": "2203.07281" }, { "id": "1909.03242" }, { "id": "2307.02483" }, { "id": "2302.03668" }, { "id": "2203.11147" }, { "id": "2010.15980" }, { "id": "2302.06503" }, { "id": "2304.05197" }, { "id": "2103.06332" }, { "id": "2005.00174" }, { "id": "2104.13733" }, { "id": "2209.07858" }, { "id": "2205.14334" }, { "id": "1908.07125" }, { "id": "2212.08073" }, { "id": "2101.07691" }, { "id": "2307.15043" }, { "id": "2303.17548" }, { "id": "2109.01653" }, { "id": "2302.09664" }, { "id": "2212.03827" }, { "id": "2104.07567" }, { "id": "1812.05271" }, { "id": "1804.07461" }, { "id": "2104.08678" }, { "id": "2206.13316" }, { "id": "2302.08582" }, { "id": "2307.15217" }, { "id": "2303.04381" }, { "id": "1907.11692" }, { "id": "2212.09251" }, { "id": "2303.15056" }, { "id": "2212.10539" }, { "id": "2110.06674" }, { "id": "2009.02252" }, { "id": "2109.07958" }, { "id": "2005.00661" } ]
2306.09539
4
Figure 1: Block-State Transformer layer. The BST-SH layer is illustrated on the left, and includes a state space model (SSM, in green) and Block Transformers (in red). For demonstration purposes the sequence is divided into 3 blocks in the picture. The details of the Block Transformer sublayer are on the right. *TRF = Transformer.
on long sequences [26] and token importance is concentrated in a local receptive field of around 50 tokens around the current time step [35].
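The caption above gives the high-level structure; the following is a minimal numpy sketch of that hybrid idea, in which an SSM sublayer produces long-range context states that block-wise attention then consumes together with each block's own tokens. The diagonal SSM parameterization, single-head attention, residual wiring, and all shapes are simplifying assumptions for illustration, not the BST implementation.

# Minimal sketch of a hybrid block-state layer: an SSM supplies context states,
# and each block attends over its own tokens plus those context states.
# Diagonal SSM, single head, and all shapes are simplifying assumptions.
import numpy as np

def ssm_scan(x, a, b, c):
    # Simple diagonal linear state space recurrence over the sequence.
    T, D = x.shape
    N = a.shape[0]
    h = np.zeros((N, D))
    context = np.zeros((T, D))
    for t in range(T):
        h = a[:, None] * h + b[:, None] * x[t][None, :]   # state update
        context[t] = (c[:, None] * h).sum(axis=0)         # read out a context state
    return context

def block_attention(queries, keys_values):
    # Single-head scaled dot-product attention of a block over its context.
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ keys_values

def block_state_layer(x, block_len, a, b, c):
    context = ssm_scan(x, a, b, c)                        # long-range contextualization
    out = np.zeros_like(x)
    for start in range(0, x.shape[0], block_len):
        end = start + block_len
        block = x[start:end]
        kv = np.concatenate([block, context[start:end]], axis=0)
        out[start:end] = block + block_attention(block, kv)   # residual connection
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
a = rng.uniform(0.5, 0.99, size=4)
b, c = rng.normal(size=4), rng.normal(size=4)
print(block_state_layer(x, block_len=4, a=a, b=b, c=c).shape)   # (16, 8)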
2306.09539#4
Block-State Transformers
State space models (SSMs) have shown impressive results on tasks that require modeling long-range dependencies and efficiently scale to long sequences owing to their subquadratic runtime complexity. Originally designed for continuous signals, SSMs have shown superior performance on a plethora of tasks, in vision and audio; however, SSMs still lag Transformer performance in Language Modeling tasks. In this work, we propose a hybrid layer named Block-State Transformer (BST), that internally combines an SSM sublayer for long-range contextualization, and a Block Transformer sublayer for short-term representation of sequences. We study three different, and completely parallelizable, variants that integrate SSMs and block-wise attention. We show that our model outperforms similar Transformer-based architectures on language modeling perplexity and generalizes to longer sequences. In addition, the Block-State Transformer demonstrates more than tenfold increase in speed at the layer level compared to the Block-Recurrent Transformer when model parallelization is employed.
http://arxiv.org/pdf/2306.09539
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
cs.CL, cs.LG
NeurIPS'23 - Thirty-seventh Conference on Neural Information Processing Systems
null
cs.CL
20230615
20231030
[ { "id": "1901.02860" } ]
2306.09093
5
To address the limitations of current multi-modal datasets that predominantly emphasize specific task types, we create our MACAW-LLM instruction dataset, which is described in Section 4. This dataset covers a wide range of instructional tasks and combines various data modalities, making it more diverse and better-suited for multi-modal instruction-tuned LLMs. We utilize the remarkable generative capability of current LLMs, such as GPT-3.5-TURBO, to curate this dataset, ensuring the target text properly aligns with human instructions. Our contributions in this work can be summarized as follows:
• We propose a novel architecture for multi-modal language modeling, which jointly learns to align multi-modal features and textual features and generate output sequence.
• We release MACAW-LLM instruction dataset, a large-scale multi-modal instruction dataset that covers diverse instructional tasks leveraging image and video modalities, which facilitates future work on multi-modal LLMs.
2 RELATED WORK
2306.09093#5
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied. In this work, we propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual, audio, and textual information. Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations. Our novel alignment module seamlessly bridges multi-modal features to textual features, simplifying the adaptation process from the modality modules to the cognitive module. In addition, we construct a large-scale multi-modal instruction dataset in terms of multi-turn dialogue, including 69K image instances and 50K video instances. We have made our data, code and model publicly available, which we hope can pave the way for future research in multi-modal LLMs and expand the capabilities of LLMs to handle diverse data modalities and address complex real-world scenarios.
http://arxiv.org/pdf/2306.09093
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI, cs.CV
Longyue Wang is the corresponding author. Our project page is at https://github.com/lyuchenyang/Macaw-LLM
null
cs.CL
20230615
20230615
[]
2306.09212
5
[Figure 1 graphic: a radial chart of the CMMLU subjects (e.g., Actuarial Science, Genetics, Psychology, Virology, Food Science, Anatomy, Chemistry, Physics, Agronomy, Biology, Elementary Mathematics); the extracted label text is too garbled to reproduce in full.]
Figure 1: CMMLU task overview.
and wording. Furthermore, numerous tasks within CMMLU have answers specific to China, which may not be universally applicable or considered correct in other regions or languages.
2306.09212#5
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09299
5
Figure 1: Overview of a single round of interaction between a teacher LLM and a student LLM, covering our first three research questions (with Fig. 6 showing RQ4, multi-round student-teacher interaction that builds on top of RQ1-RQ3). RQ1: The teacher randomly intervenes and communicates explanations to improve the student’s performance (right part). RQ2: The teacher decides when to intervene by computing the Expected Utility of Intervention using the Pre-Intervention and Post-Intervention Simulation prompts (middle part). RQ3: The teacher communicates personalized explanations that are more helpful for the student (left part). hypothetical people in classical theory-of-mind tests [4].2 However, we do not yet know how well language models can teach other agents to solve reasoning tasks via explanations.
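To make RQ2's Intervention Function concrete, the sketch below ranks examples by a teacher's estimated expected utility of intervention (the student's chance of being correct with an explanation minus without) and spends a limited budget on the highest-utility ones. The probability values are placeholder numbers; in the paper they come from the teacher's few-shot pre- and post-intervention simulations of the student.

# Sketch of budget-constrained intervention by expected utility.
# p_with / p_without stand in for the teacher's few-shot simulations of the student.
def choose_interventions(examples, budget):
    # expected utility = P(student correct | explanation) - P(student correct | no explanation)
    ranked = sorted(examples, key=lambda ex: ex["p_with"] - ex["p_without"], reverse=True)
    k = int(budget * len(examples))
    return {ex["id"] for ex in ranked[:k]}

examples = [
    {"id": 1, "p_with": 0.9, "p_without": 0.3},   # large gain: worth explaining
    {"id": 2, "p_with": 0.8, "p_without": 0.8},   # student already answers correctly
    {"id": 3, "p_with": 0.6, "p_without": 0.2},
    {"id": 4, "p_with": 0.4, "p_without": 0.5},   # an explanation would likely hurt
]
print(choose_interventions(examples, budget=0.5))   # -> {1, 3}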
2306.09299#5
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
A hallmark property of explainable AI models is the ability to teach other agents, communicating knowledge of how to perform a task. While Large Language Models perform complex reasoning by generating explanations for their predictions, it is unclear whether they also make good teachers for weaker agents. To address this, we consider a student-teacher framework between two LLM agents and study if, when, and how the teacher should intervene with natural language explanations to improve the student's performance. Since communication is expensive, we define a budget such that the teacher only communicates explanations for a fraction of the data, after which the student should perform well on its own. We decompose the teaching problem along four axes: (1) if teacher's test time intervention improve student predictions, (2) when it is worth explaining a data point, (3) how the teacher should personalize explanations to better teach the student, and (4) if teacher explanations also improve students on future unexplained data. We first show that teacher LLMs can indeed intervene on student reasoning to improve their performance. Next, inspired by the Theory of Mind abilities of effective teachers, we propose building two few-shot mental models of the student. The first model defines an Intervention Function that simulates the utility of an intervention, allowing the teacher to intervene when this utility is the highest and improving student performance at lower budgets. The second model enables the teacher to personalize explanations for a particular student and outperform unpersonalized teachers. We also demonstrate that in multi-turn interactions, teacher explanations generalize and learning from explained data improves student performance on future unexplained data. Finally, we verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.
http://arxiv.org/pdf/2306.09299
Swarnadeep Saha, Peter Hase, Mohit Bansal
cs.CL, cs.AI, cs.LG
NeurIPS 2023 (23 pages, 12 figures). Our code is available at https://github.com/swarnaHub/ExplanationIntervention
null
cs.CL
20230615
20231114
[ { "id": "2302.13971" }, { "id": "2007.12248" }, { "id": "2204.02311" }, { "id": "2302.08399" }, { "id": "2304.05489" }, { "id": "2304.11490" }, { "id": "2210.11416" }, { "id": "2110.14168" }, { "id": "2212.10071" }, { "id": "1702.08608" }, { "id": "2302.02083" }, { "id": "2301.12726" }, { "id": "2112.04359" }, { "id": "1503.02531" }, { "id": "2010.04119" }, { "id": "2303.12712" }, { "id": "2212.08410" }, { "id": "2303.17651" }, { "id": "2212.09721" }, { "id": "2305.11426" }, { "id": "2305.14763" } ]