2309.16609#237
Qwen Technical Report
GPT-3.5: [Chinese original of the joke response; the English translation appears in the next chunk.]
2309.16609#236
2309.16609#238
2309.16609
[ "2305.20050" ]
2309.16609#238
Qwen Technical Report
Elo 964. In the supermarket's fruit section, Xiaoming saw a box of red apples and a box of green apples. He carried the two boxes of apples to the cash register to queue up for payment. The cashier looked at the two boxes of apples in Xiaoming's hands and said, "Hey, little friend, do you know the difference between red apples and green apples?" Xiaoming thought for a moment and then smiled, "Of course I know! Red apples are couples, and green apples are single dogs."
2309.16609#237
2309.16609#239
2309.16609
[ "2305.20050" ]
2309.16609#239
Qwen Technical Report
Xiaoming's answer amused the cashier, who couldn't help but laugh. GPT-4: [Chinese original of the response; the English translation appears below.]
2309.16609#238
2309.16609#240
2309.16609
[ "2305.20050" ]
2309.16609#240
Qwen Technical Report
A very kind boy found an old lady picking through a pile of apples at the supermarket. Seeing that she could not find a satisfactory apple, he approached to help.
2309.16609#239
2309.16609#241
2309.16609
[ "2305.20050" ]
2309.16609#241
Qwen Technical Report
The boy said, "Granny, I've checked all the apples in this box, and there isn't a good one. But I know I saved an excellent apple at the counter for you. Let me get it for you." The old lady nodded gratefully, and the boy eagerly ran to get the apple. He joyfully handed a huge, vibrant apple to the old lady. Confused, the old lady looked at the boy, then at the "apple" in her hand, and smiled. She said, "Child, thank you for your kindness, but this is a tomato."
2309.16609#240
2309.16609#242
2309.16609
[ "2305.20050" ]
2309.16609#242
Qwen Technical Report
Elo 1144. Mathematics [Chinese original of the problem; translated below:] A square is divided into five equal rectangles, each with a perimeter of 60 centimeters. What is the perimeter of the square in centimeters?
2309.16609#241
2309.16609#243
2309.16609
[ "2305.20050" ]
2309.16609#243
Qwen Technical Report
# Responses Model: Qwen-7B-Chat (SFT) Response: [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#242
2309.16609#244
2309.16609
[ "2305.20050" ]
2309.16609#244
Qwen Technical Report
If the side length of the square is x, then the side length of each rectangle is x/5. According to the question, the perimeter of each rectangle is 60 centimeters, so we have: 2(x/5) + 2(x/5) + 2(x/5) + 2(x/5) = 60. Simplifying, we get x = 60/(2 + 2 + 2 + 2) = 15. So the perimeter of the square is 4x = 4 × 15 = 60 centimeters.
2309.16609#243
2309.16609#245
2309.16609
[ "2305.20050" ]
2309.16609#245
Qwen Technical Report
Elo 976. Qwen-14B-Chat (SFT): [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#244
2309.16609#246
2309.16609
[ "2305.20050" ]
2309.16609#246
Qwen Technical Report
Let the side length of the square be x and the length of the rectangle be y. According to the problem: x = 5y, 2y + 2y = 60, 4y = 60, y = 15, x = 5 × 15 = 75. Perimeter of the square = 4x = 4 × 75 = 300. Qwen-14B-Chat (RLHF): [Chinese original of the response; the English translation appears below.]
2309.16609#245
2309.16609#247
2309.16609
[ "2305.20050" ]
2309.16609#247
Qwen Technical Report
[Chinese original continues; the English translation appears in the next chunk.]
2309.16609#246
2309.16609#248
2309.16609
[ "2305.20050" ]
2309.16609#248
Qwen Technical Report
First, let's assume that the square's side length is x. Therefore, the length of each rectangle is x and the width is x/5. According to the formula for the perimeter of a rectangle, the perimeter of each rectangle is 2(x + x/5) = 12x/5. Given that the perimeter of each rectangle is 60 centimeters, we substitute this value into the equation: 12x/5 = 60. Solving for x, we find x = 25. Therefore, the perimeter of the square is 4x = 100 centimeters.
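A minimal sketch that checks this solution against the problem statement (the variable names are illustrative only):

# A square of side x cut into 5 equal rectangles gives pieces of size x by x/5.
x = 25                             # candidate side length from the solution above
rect_perimeter = 2 * (x + x / 5)   # perimeter of one rectangle
assert rect_perimeter == 60        # matches the problem statement
print("square perimeter:", 4 * x)  # prints 100 (centimeters)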
2309.16609#247
2309.16609#249
2309.16609
[ "2305.20050" ]
2309.16609#249
Qwen Technical Report
Answer: The perimeter of the square is 100 centimeters. GPT-3.5: [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#248
2309.16609#250
2309.16609
[ "2305.20050" ]
2309.16609#250
Qwen Technical Report
If the side length of the square is x centimeters, then the length of each rectangle is x centimeters and the width is 5x/4 centimeters. The perimeter of each rectangle is 2(x + 5x/4) = 60 centimeters. Simplifying, we get 13x/2 = 60. Solving the equation, we find x = (60 × 2)/13 = 120/13 ≈
2309.16609#249
2309.16609#251
2309.16609
[ "2305.20050" ]
2309.16609#251
Qwen Technical Report
9.23. The perimeter of the square is 4x = 4 × 9.23 ≈ 36.92 centimeters. Elo values from the comparison table: 976, 1139, 897, 1010. GPT-4: [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#250
2309.16609#252
2309.16609
[ "2305.20050" ]
2309.16609#252
Qwen Technical Report
From the given problem, it can be inferred that the length of one side of each rectangle is equal to the side length of the square, denoted as a. Let the length of the other side of each rectangle be denoted as b; then we have 5b = a. The perimeter of each rectangle is 2 × (a + b) = 60 centimeters. Substituting 5b = a into the formula for the perimeter of the rectangle, we have 2a + 2 × 5b = 60. Simplifying gives 2a + 10b = 60. Since a = 5b, it follows that 2a = 2 × 5b = 10b. Therefore, 10b + 10b = 60, which gives b = 3 centimeters and a = 5b = 15 centimeters. Thus, the perimeter of the square is 4a = 4 × 15 = 60 centimeters.
2309.16609#251
2309.16609#253
2309.16609
[ "2305.20050" ]
2309.16609#253
Qwen Technical Report
Mathematics [Chinese original of the problem; the English translation appears in the next chunk.]
2309.16609#252
2309.16609#254
2309.16609
[ "2305.20050" ]
2309.16609#254
Qwen Technical Report
The students are having a radio exercise competition, and the whole class is neatly arranged into 6 equal rows. Xiaohong is in the second row; counting from the front, she is in the 5th position, and counting from the back, she is in the 3rd position. How many people are there in this class? Responses Model: Qwen-7B-Chat (SFT) [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#253
2309.16609#255
2309.16609
[ "2305.20050" ]
2309.16609#255
Qwen Technical Report
The number of people in front of Xiaohong = 5 - 1 = 4, the number of people behind Xiaohong = 3 - 1 = 2, and the total number of people in the class = the number in front of Xiaohong + the number behind Xiaohong + Xiaohong herself = 4 + 2 + 1 = 7. Response Qwen-14B-Chat (SFT): [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#254
2309.16609#256
2309.16609
[ "2305.20050" ]
2309.16609#256
Qwen Technical Report
5 + 3 - 1 = 7 (people); 7 × 6 = 42 (people). There are a total of 42 people in this class. Qwen-14B-Chat (RLHF): [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#255
2309.16609#257
2309.16609
[ "2305.20050" ]
2309.16609#257
Qwen Technical Report
There are 4 people in front of Xiaohong and 2 people behind her, so there are a total of 4 + 2 + 1 = 7 people in this row. There are 6 rows in the class, so there are 6 × 7 = 42 people in the whole class. GPT-3.5: [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#256
2309.16609#258
2309.16609
[ "2305.20050" ]
2309.16609#258
Qwen Technical Report
According to the information given, Xiaohong is in the second row, fifth position, which is also the 11th position. And Xiaohong is in the third position from the end, which is also the 11th position counting from the back. So there are a total of 11 people in this class. GPT-4: [Chinese original of the response; the English translation appears in the next chunk.]
2309.16609#257
2309.16609#259
2309.16609
[ "2305.20050" ]
2309.16609#259
Qwen Technical Report
Elo values from the comparison table: 976, 976, 1139, 897, 1010. If Xiaohong is the 5th person counting from the front and the 3rd person counting from the back, then there are 5 + 3 - 1 = 7 people in this row. The whole class is neatly arranged in 6 rows, so there are 7 × 6 = 42 people in the class. Code [Chinese original of the prompt; the English translation appears in the next chunk.]
2309.16609#258
2309.16609#260
2309.16609
[ "2305.20050" ]
2309.16609#260
Qwen Technical Report
Implement a regular expression matching in Python, which should be able to match all 11-digit numbers that start with "139" and end with "00". Responses Model: Qwen-7B-Chat (SFT) Response: [Chinese original of the response; the English translation appears below.]
2309.16609#259
2309.16609#261
2309.16609
[ "2305.20050" ]
2309.16609#261
Qwen Technical Report
import re

def match_phone_numbers(phone_numbers):
    pattern = r'^139\d{9}$'
    return re.findall(pattern, phone_numbers)

phone_numbers = ['13900000000', '13912345678', '13998765432', '13987654321']
print(match_phone_numbers(phone_numbers))

[Chinese original of the explanation; the English translation appears below.]
2309.16609#260
2309.16609#262
2309.16609
[ "2305.20050" ]
2309.16609#262
Qwen Technical Report
[Chinese original of the explanation continues.]
2309.16609#261
2309.16609#263
2309.16609
[ "2305.20050" ]
2309.16609#263
Qwen Technical Report
You can use the re module in Python to implement this regular expression matching. Here is an example code:

import re

def match_phone_numbers(phone_numbers):
    pattern = r'^139\d{9}$'
    return re.findall(pattern, phone_numbers)

phone_numbers = ['13900000000', '13912345678', '13998765432', '13987654321'
2309.16609#262
2309.16609#264
2309.16609
[ "2305.20050" ]
2309.16609#264
Qwen Technical Report
]
print(match_phone_numbers(phone_numbers))

Elo 941. This code defines a regular expression pattern '^139\d{9}$'. This pattern represents a string that starts with "139", ends with "00", and has a length of 11 digits. Then, we use the re.findall function to find all the strings that match this pattern, and finally, we return these matching strings. Note that this code assumes that phone_numbers is a list containing multiple strings. If your phone_numbers is a string, you must first convert it to a list.
2309.16609#263
2309.16609#265
2309.16609
[ "2305.20050" ]
2309.16609#265
Qwen Technical Report
Qwen-14B-Chat (SFT): [Chinese original; the English translation appears below.] Elo 941.

import re

def match_phone_numbers(phone_numbers):
    pattern = r'^139\d{9}$'
2309.16609#264
2309.16609#266
2309.16609
[ "2305.20050" ]
2309.16609#266
Qwen Technical Report
    return [phone_number for phone_number in phone_numbers if re.match(pattern, phone_number)]

phone_numbers = ['13912345678', '13923456789', '13934567890', '13945678901', '13956789012', '13967890123', '13978901234', '13989012345', '13990123456', '13900000000']
print(match_phone_numbers(phone_numbers))
2309.16609#265
2309.16609#267
2309.16609
[ "2305.20050" ]
2309.16609#267
Qwen Technical Report
[Chinese original of the explanation; the English translation appears below.]
2309.16609#266
2309.16609#268
2309.16609
[ "2305.20050" ]
2309.16609#268
Qwen Technical Report
You can use the re module of Python to implement this regular expression matching.
2309.16609#267
2309.16609#269
2309.16609
[ "2305.20050" ]
2309.16609#269
Qwen Technical Report
Here is a simple example:

import re

def match_phone_numbers(phone_numbers):
    pattern = r'^139\d{9}$'
    return [phone_number for phone_number in phone_numbers if re.match(pattern, phone_number)]

phone_numbers = ['13912345678', '13923456789', '13934567890', '13945678901', '13956789012', '13967890123', '13978901234', '13989012345', '13990123456', '13900000000'
2309.16609#268
2309.16609#270
2309.16609
[ "2305.20050" ]
2309.16609#270
Qwen Technical Report
]
print(match_phone_numbers(phone_numbers))

In this example, we define a regular expression pattern '^139\d{9}$', which matches a string starting with 139, followed by nine arbitrary digits and ending with 0. Then we use the re.match() function to check whether each phone number matches this pattern. If it does, we add this phone number to the result list. Qwen-14B-Chat (RLHF): [Chinese original of the response; the English translation appears below.]
2309.16609#269
2309.16609#271
2309.16609
[ "2305.20050" ]
2309.16609#271
Qwen Technical Report
Elo 1090. [Chinese original of the response, including its example code; the English translation appears below.]
2309.16609#270
2309.16609#272
2309.16609
[ "2305.20050" ]
2309.16609#272
Qwen Technical Report
[Chinese original continues.]
2309.16609#271
2309.16609#273
2309.16609
[ "2305.20050" ]
2309.16609#273
Qwen Technical Report
[Chinese original continues.]
2309.16609#272
2309.16609#274
2309.16609
[ "2305.20050" ]
2309.16609#274
Qwen Technical Report
You can use the re module in Python to implement regular expression matching.
2309.16609#273
2309.16609#275
2309.16609
[ "2305.20050" ]
2309.16609#275
Qwen Technical Report
Here is an example code:

import re

pattern = r'^139\d{9}$'

# Match test
text = '13912345678'
match = re.match(pattern, text)
if match:
    print('Success')
else:
    print('Fail')

# Unmatch test
text = '139123456780'
match = re.match(pattern, text)
if match:
    print('Success')
else:
    print('Fail')

Among them, '^'
2309.16609#274
2309.16609#276
2309.16609
[ "2305.20050" ]
2309.16609#276
Qwen Technical Report
represents matching the beginning of a string, '\d' matches a digit, '{9}' matches nine digits, and '$' matches the end of the string. Combining these symbols, we can obtain a regular expression that matches 11 digits. When testing for a match, the re.match() function is used to attempt to match the text. A successful match returns a Match object; otherwise, it returns None. In this example, the first test matches successfully, while the second test fails to match.
2309.16609#275
2309.16609#277
2309.16609
[ "2305.20050" ]
2309.16609#277
Qwen Technical Report
GPT-3.5: [Chinese original of the response; the English translation appears below.]
2309.16609#276
2309.16609#278
2309.16609
[ "2305.20050" ]
2309.16609#278
Qwen Technical Report
Elo 1043. [Chinese original of the response, including its example code; the English translation appears in the next chunk.]
2309.16609#277
2309.16609#279
2309.16609
[ "2305.20050" ]
2309.16609#279
Qwen Technical Report
The re module in Python can conveniently accomplish regular expression matching. Below is an example code that matches all 11-digit numbers beginning with 139 and ending with 00.

import re

text = '139123456700'
pattern = r'^139\d{8}00$'
match = re.match(pattern, text)
if match:
    print('Success')
else:
    print('Fail')

Output: Success

In the code, a text string 'text' is first defined for testing regular expression matching. Then, a regular expression 'pattern' is defined, where
2309.16609#278
2309.16609#280
2309.16609
[ "2305.20050" ]
2309.16609#280
Qwen Technical Report
'^139' means starting with 139, '\d{8}' means any eight digits, and '00$' means ending with 00. Then, the re.match() function is used for matching. If the match is successful, 'Success' is output; otherwise, 'Fail' is output. GPT-4: [Chinese original of the response; the English translation appears below.]
2309.16609#279
2309.16609#281
2309.16609
[ "2305.20050" ]
2309.16609#281
Qwen Technical Report
[Chinese original of GPT-4's response, including its example code; the English translation appears below.]
2309.16609#280
2309.16609#282
2309.16609
[ "2305.20050" ]
2309.16609#282
Qwen Technical Report
[Chinese original continues.]
2309.16609#281
2309.16609#283
2309.16609
[ "2305.20050" ]
2309.16609#283
Qwen Technical Report
Below is the Python code you need. This code uses the Python re module.

import re

def match_phone_number(string):
    pattern = r'^139\d{6}00$'
    return re.match(pattern, string) is not None

# Test
print(match_phone_number('13912345600'))   # Output: True
print(match_phone_number('15912345600'))   # Output: False
print(match_phone_number('13912345601'))   # Output: False
print(match_phone_number('139123456000'))  # Output: False

Elo 1134. In here, '^139\d{6}00$' is a regular expression.
2309.16609#282
2309.16609#284
2309.16609
[ "2305.20050" ]
2309.16609#284
Qwen Technical Report
'^' represents the beginning of the string, '139' matches the literal '139', '\d{6}' matches any six digits, '00' matches the literal '00', and '$' represents the end of the string. Therefore, this regular expression can match any 11-digit number that starts with 139 and ends with 00. Code [Chinese original of the next question; the English translation appears two chunks below.]
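A short sketch comparing the candidate patterns quoted in the regular-expression responses above; the pattern strings are reconstructions of the garbled originals, so treat them as assumptions:

import re

candidates = {
    'Qwen (SFT/RLHF)': r'^139\d{9}$',
    'GPT-3.5':         r'^139\d{8}00$',
    'GPT-4':           r'^139\d{6}00$',
}
tests = ['13912345600', '13900000000', '15912345600', '139123456000']
for name, pattern in candidates.items():
    hits = [t for t in tests if re.match(pattern, t)]
    print(name, '->', hits)
# Only ^139\d{6}00$ matches exactly the 11-digit numbers that start with 139 and end with 00.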
2309.16609#283
2309.16609#285
2309.16609
[ "2305.20050" ]
2309.16609#285
Qwen Technical Report
def maxDepth(self, root: TreeNode) -> int:
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.left)
        queue = tmp
        res += 1
    return res
2309.16609#284
2309.16609#286
2309.16609
[ "2305.20050" ]
2309.16609#286
Qwen Technical Report
I use the following code to find the maximum depth of a binary tree, but the answer is incorrect. What could be the problem?

def maxDepth(self, root: TreeNode) -> int:
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.left)
        queue = tmp
        res += 1
    return res

Responses Model: Qwen-7B-Chat (SFT) [Chinese original of the response; the English translation appears below.]
2309.16609#285
2309.16609#287
2309.16609
[ "2305.20050" ]
2309.16609#287
Qwen Technical Report
Your code is correct, the answer is correct. In your code, you first add the root node to the queue, then enter a loop until the queue is empty. In each loop, you add the left child node and right child node of all nodes in the queue to the queue.
2309.16609#286
2309.16609#288
2309.16609
[ "2305.20050" ]
2309.16609#288
Qwen Technical Report
Then, you increase the depth counter res. Finally, you return the depth counter res. Response Qwen-14B-Chat (SFT): [Chinese original of the response; the English translation appears below.]
2309.16609#287
2309.16609#289
2309.16609
[ "2305.20050" ]
2309.16609#289
Qwen Technical Report
[Chinese original continues, including the corrected code; the English translation follows.] Your code will return the number of nodes in the binary tree, not the maximum depth.
2309.16609#288
2309.16609#290
2309.16609
[ "2305.20050" ]
2309.16609#290
Qwen Technical Report
This is because you increase the value of 'res' in every loop, instead of only increasing it when a new deepest node is found. You can fix this problem by increasing the value of 'res' only when a new deepest node is found. Elo 876, 949. Here is the modified code:

def maxDepth(self, root: TreeNode) -> int:
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.right)
        queue = tmp
        if len(queue) > res:
            res = len(queue)
    return res
2309.16609#289
2309.16609#291
2309.16609
[ "2305.20050" ]
2309.16609#291
Qwen Technical Report
Qwen-14B-Chat (RLHF): [Chinese original of the response; the English translation appears below.]
2309.16609#290
2309.16609#292
2309.16609
[ "2305.20050" ]
2309.16609#292
Qwen Technical Report
[Chinese original continues.]
2309.16609#291
2309.16609#293
2309.16609
[ "2305.20050" ]
2309.16609#293
Qwen Technical Report
def maxDepth(self, root: TreeNode) -> int:
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.right)
        queue = tmp
        res = max(res, len(queue))
    return res

[Chinese original of the explanation; the English translation appears below.]
2309.16609#292
2309.16609#294
2309.16609
[ "2305.20050" ]
2309.16609#294
Qwen Technical Report
The problem with this code is that it does not correctly handle the depth of the left and right subtrees. Specifically, when a node is taken out from the queue, it adds both its left and right subtrees to the queue and increases the length of the queue by one. However, if either the left or right subtree of the node is empty, then the depth of this node should be the length of the queue minus one, not the length of the queue plus one. Here is the modified code:
2309.16609#293
2309.16609#295
2309.16609
[ "2305.20050" ]
2309.16609#295
Qwen Technical Report
def maxDepth(self, root: TreeNode) -> int:
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.right)
        queue = tmp
        res = max(res, len(queue))
    return res

In this code, when each node is extracted, we only add the non-empty parts of its left and right subtrees to the queue, and calculate the length of the queue. Then, we compare the length of the queue with the current maximum depth and update the maximum depth. This way, we can correctly calculate the maximum depth of the binary tree.
2309.16609#294
2309.16609#296
2309.16609
[ "2305.20050" ]
2309.16609#296
Qwen Technical Report
[Chinese original of the response, including its code; the English translation follows.] The problem lies in line 11 of the code. When adding node.right to the tmp list, it should be tmp.append(node.right) instead of tmp.append(node.left). The modified code is as follows:

def maxDepth(self, root: TreeNode) -> int:
    if not root:
        return 0
    queue, res = [root], 0
    while queue:
        tmp = []
        for node in queue:
            if node.left:
                tmp.append(node.left)
            if node.right:
                tmp.append(node.right)
        queue = tmp
        res += 1
    return res

[Chinese original of the next response begins here; the English translation appears below.]
2309.16609#295
2309.16609#297
2309.16609
[ "2305.20050" ]
2309.16609#297
Qwen Technical Report
[Chinese original continues.]
2309.16609#296
2309.16609#298
2309.16609
[ "2305.20050" ]
2309.16609#298
Qwen Technical Report
[Chinese original continues, including the corrected code and its explanation; the English translation appears in the next chunk.]
2309.16609#297
2309.16609#299
2309.16609
[ "2305.20050" ]
2309.16609#299
Qwen Technical Report
1ã ç ´å °é å ç é å 为 ç©ºï¼ æ ¤æ ¶ç æ·±åº¦å°±æ ¯äº å æ ç æ 大深度ã 1090 1095 GPT-3.5 # GPT-4 57 There is an error in your code: In this line of code if node. right : tmp.append(node. left ), you should be adding the right child node to the tmp list, but you wrote node. left instead, so only the left child node will be added to the tmp list. The correct code should be node. right . The correct code is as follows: def maxDepth(self, root : TreeNode) â > int: if not root : return 0 queue, res = [ root ], 0 while queue: tmp = [] for node in queue: if node. left : tmp.append(node. left ) if node. right : tmp.append(node. right ) queue = tmp res += 1 return res The idea of this solution is: in each iteration, traverse all the nodes in the current level, and add all the nodes in the next level to the queue for the next iteration. So after each iteration, the depth res is increased by 1. Continue this process until the queue for traversal is empty, and the depth at that time is the maximum depth of the binary tree.
2309.16609#298
2309.16609#300
2309.16609
[ "2305.20050" ]
2309.16609#300
Qwen Technical Report
A.3 ANALYSIS OF CODE INTERPRETER Here we provide a case of comparison between CODE LLAMA and QWEN-CHAT. This case demonstrates the advantages of QWEN-CHAT in processing tabular data and performing complex tasks. [Figure 5 screenshot text: given the query "Create a scatter plot with different size and color settings" on an uploaded scatter_data.csv, QWEN first loads the file with pandas, inspects the columns (x, y, sizes, colors), and then plots with matplotlib using those columns; see the caption below.]
2309.16609#299
2309.16609#301
2309.16609
[ "2305.20050" ]
2309.16609#301
Qwen Technical Report
[Figure 5 screenshot text continues: code-interpreter transcripts for the scatter-plot task.]
2309.16609#300
2309.16609#302
2309.16609
[ "2305.20050" ]
2309.16609#302
Qwen Technical Report
[Figure 5 screenshot text continues.]
2309.16609#301
2309.16609#303
2309.16609
[ "2305.20050" ]
2309.16609#303
Qwen Technical Report
[Figure 5 screenshot text continues.]
2309.16609#302
2309.16609#304
2309.16609
[ "2305.20050" ]
2309.16609#304
Qwen Technical Report
[Figure 5 screenshot text continues.]
2309.16609#303
2309.16609#305
2309.16609
[ "2305.20050" ]
2309.16609#305
Qwen Technical Report
[Figure 5 screenshot text continues: duplicated transcript panels.]
2309.16609#304
2309.16609#306
2309.16609
[ "2305.20050" ]
2309.16609#306
Qwen Technical Report
[Figure 5 screenshot text continues.]
2309.16609#305
2309.16609#307
2309.16609
[ "2305.20050" ]
2309.16609#307
Qwen Technical Report
Figure 5: Example showcasing QWEN-CHAT's ability in using a code interpreter via ReAct prompting. The ReAct instruction is omitted for clarity. QWEN creates a two-step plan and first investigates the columns present in the CSV file before proceeding to draw the plot, as shown in the top-left figure. CODE LLAMA, however, attempts to draw the plot based on non-existent columns in its initial attempt, as seen in the bottom figure. CODE LLAMA can only reliably perform the task if the columns are provided in the user query, as shown in the top-right figure.
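A minimal, hypothetical reconstruction of the plotting code visible in the figure, assuming a scatter_data.csv with columns x, y, sizes, and colors (the file and column names come from the figure; everything else is a sketch):

import pandas as pd
import matplotlib.pyplot as plt

# Step 1 (as QWEN does): load the uploaded CSV and inspect its columns.
df = pd.read_csv("scatter_data.csv")
print(df.head())

# Step 2: draw the scatter plot, mapping point size and color to the
# 'sizes' and 'colors' columns found in step 1.
fig, ax = plt.subplots()
ax.scatter(df["x"], df["y"], s=df["sizes"], c=df["colors"])
plt.show()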
2309.16609#306
2309.16609#308
2309.16609
[ "2305.20050" ]
2309.16609#308
Qwen Technical Report
2309.16609#307
2309.16609
[ "2305.20050" ]
2309.16797#0
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
arXiv:2309.16797v1 [cs.CL] 28 Sep 2023 © Google DeepMind # PROMPTBREEDER: SELF-REFERENTIAL SELF-IMPROVEMENT VIA PROMPT EVOLUTION Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel # Google DeepMind {chrisantha,dylski,henrykm,osindero,rocktaschel}@google.com # ABSTRACT Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt strategies are often sub-optimal. In this paper, we present PROMPTBREEDER, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, evaluates them for fitness on a training set, and repeats this process over multiple generations to evolve task-prompts. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification. # INTRODUCTION Prompting is central to the downstream performance of foundation models. For example, different prompt strategies1 can have a significant impact on a model's
2309.16797#1
2309.16797
[ "2305.03495" ]
2309.16797#1
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
reasoning abilities (Wei et al., 2022; Nye et al., 2021; Zhou et al., 2022; Wang et al., 2022; Zhou et al., 2023; Wang et al., 2023b), multimodal processing abilities (Yang et al., 2023b; Wang et al., 2023d), or tool use abilities (Yao et al., 2022; Schick et al., 2023). Furthermore, prompting can improve model distillation (Wang et al., 2023c; Hsieh et al., 2023) and it can be used to simulate agentic behavior (Wang et al., 2023a; Park et al., 2023; Wu et al., 2023). However, these prompt strategies are manually engineered. Since the specific way a prompt is phrased can have a dramatic effect on its utility (Madaan & Yazdanbakhsh, 2022), it raises the question of whether prompt engineering can be automated. Automatic Prompt Engineer (APE, Zhou et al., 2023) attempts to address this by generating an initial distribution of prompts using another prompt that infers the problem from a number of input-output examples from the dataset. However, Zhou et al. found "diminishing returns to further selection rounds as the quality seems to stabilize after three rounds", and consequently abandoned the use of an iterative APE. We propose a solution to the problem of diminishing returns via a diversity-maintaining evolutionary algorithm for self-referential self-improvement of prompts for LLMs. Schmidhuber (1990) notes that the "program of a neural network is its weight matrix". Consequently, this "program" can be changed in a self-referential way by the neural network itself (Schmidhuber, 1993; Irie et al., 2022). Such a neural network that improves itself, as well as improving the way it improves itself, might be an important stepping stone towards open-ended self-referential self-improvement of AIs (Schmidhuber, 2003). However, self-improvement via self-referential weight matrices is costly, as it requires additional parameters that modify all of the model's
2309.16797#0
2309.16797#2
2309.16797
[ "2305.03495" ]
2309.16797#2
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
1 See Appendix A for definitions of terminology.

Method | LLM | MultiArith* | SingleEq* | AddSub* | SVAMP* | SQA | CSQA | AQuA-RAT | GSM8K
Zero-shot
CoT | text-davinci-003 | (83.8) | (88.1) | (85.3) | (69.9) | (63.8) | (65.2) | (38.9) | (56.4)
PoT | text-davinci-003 | (92.2) | (91.7) | (85.1) | (70.8) | - | - | (43.9) | (57.0)
PS | text-davinci-003 | (87.2) | (89.2) | (88.1) | (72.0) | - | - | (42.5) | (58.2)
PS+ | text-davinci-003 | (91.8) | (94.7) | (92.2) | (75.7) | (65.4) | (71.9) | (46.0) | (59.3)
PS | PaLM 2-L | 97.7 | 90.6 | 72.4 | 83.8 | 50.0 | 77.9 | 40.2 | 59.0
PS+ | PaLM 2-L | 92.5 | 94.7 | 74.4 | 86.3 | 50.1 | 73.3 | 39.4 | 60.5
APE | PaLM 2-L | 95.8 | 82.2 | 72.2 | 73.0 | 38.4 | 67.3 | 45.7 | 77.9
OPRO | PaLM 2-L | - | - | - | - | - | - | - | 80.2
PB (ours) | PaLM 2-L | 99.7 | 96.4 | 87.8 | 90.2 | 71.8 | 85.4 | 62.2 | 83.9
Few-shot
Manual-CoT | text-davinci-003 | (93.6) | (93.5) | (91.6) | (80.3) | (71.2) | (78.3) | (48.4) | (58.4)
Auto-CoT | text-davinci-003 | (95.5) | (92.1) | (90.8) | (78.1) | - | - | (41.7) | (57.1)
PB (ours) | PaLM 2-L | 100.0 | 98.9 | 87.1 | 93.7 | 80.2 | 85.9 | 64.6 | 83.5
2309.16797#1
2309.16797#3
2309.16797
[ "2305.03495" ]
2309.16797#3
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Table 1: Promptbreeder (PB) comparison to Chain-of-Thought (Manual-CoT, Wei et al., 2022), Zero-shot CoT (Kojima et al., 2022), Program-of-Thoughts (PoT, Chen et al., 2022), Auto-CoT (Zhang et al., 2023b), OPRO (Yang et al., 2023a), Automatic Prompt Engineer Zero-shot prompt (APE, Zhou et al., 2023), Plan-and-Solve with (PS+) and without the improved prompt (PS, Wang et al., 2023b), and using PaLM 2-L (Anil et al., 2023) as the underlying LLM (APE, PS PaLM 2-L / PS+ PaLM 2-L). Best results in both the zero-shot and few-shot categories are highlighted in bold. Results in brackets are directly taken from the Plan-and-Solve paper, which uses text-davinci-003 (Brown et al., 2020). For datasets marked with asterisks (MultiArith*, SingleEq*, AddSub*, and SVAMP*), we randomly took half of the examples for training and report accuracy on the remaining test set. See Section 4 and Appendix I for details on the prompts and datasets. parameters. Since behaviors and capabilities of LLMs are significantly influenced by the prompts that we provide to them, we can similarly think of prompts as the program of an LLM (Zhou et al., 2023). In this view, changing a prompt strategy such as the Scratchpad method (Nye et al., 2021) or Chain-of-Thought Prompting (Wei et al., 2022) corresponds to changing the "program" of the LLM. Taking this analogy further, we can use the LLM itself to change its prompts, as well as the way it changes these prompts, moving us towards fully self-referential self-improving systems grounded in LLMs. In this paper, we introduce PROMPTBREEDER (PB) for self-referential self-improvement of LLMs.
2309.16797#2
2309.16797#4
2309.16797
[ "2305.03495" ]
2309.16797#4
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Given a seed set of mutation-prompts (i.e. instructions to modify a task-prompt), thinking-styles (i.e. text descriptions of general cognitive heuristics), and a domain-specific problem description, PB generates variations of the task-prompts and mutation-prompts, exploiting the fact that LLMs can be prompted to act as mutation operators (Meyerson et al., 2023). Based on the fitness of the evolved task-prompts as measured on the training set, we select a subset of evolutionary units, consisting of task-prompts and their associated mutation-prompt, to transmit to future generations. Over multiple generations of PB, we observe prompts adapting to the domain at hand. For example, in a mathematical domain, PB evolved the task-prompt "Show all your working. II. You should use the correct mathematical notation and vocabulary, where appropriate. [...] V. Your workings out should be neat and legible" on GSM8K (see Appendix J). On a wide range of commonly used benchmarks spanning commonsense reasoning, arithmetic, and ethics, we find that PB outperforms state-of-the-art methods like Chain-of-Thought (Wei et al., 2022) and Plan-and-Solve (Wang et al., 2023b) prompting. As PB does not require any parameter updates for self-referential self-improvement, we believe this approach points to an interesting future where larger and more capable LLMs could further amplify the gains of our approach. In summary, this paper makes the following main contributions: (i) we introduce Promptbreeder, a self-referential self-improvement method for LLMs that evolves prompts for a domain at hand, as well as improves the way it is evolving these prompts, (ii) we report improvements over state-of-the-art prompt strategies on a wide range of commonly used arithmetic and commonsense reasoning benchmarks, and (iii) we investigate the various self-referential components of Promptbreeder and their contribution to our results.
2309.16797#3
2309.16797#5
2309.16797
[ "2305.03495" ]
2309.16797#5
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
[Figure 1 diagram: a population of task-prompts and mutation-prompts is initialized from a problem description (specific to GSM8K, AQuA, ETHOS, SVAMP, etc.), a set of thinking-styles, and a set of mutation-prompts; the population of N task-prompts with their associated mutation-prompts is then evolved using operators such as direct mutation, estimation-of-distribution mutation, hypermutation of mutation-prompts, and Lamarckian mutation, with fitness estimated on batches of training Q&A pairs.]
2309.16797#4
2309.16797#6
2309.16797
[ "2305.03495" ]
2309.16797#6
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Figure 1: Overview of Promptbreeder. Given a problem description and an initial set of general "thinking-styles" and mutation-prompts, Promptbreeder generates a population of units of evolution, each unit consisting of typically two task-prompts and a mutation-prompt. We then run a standard binary tournament genetic algorithm (Harvey, 2011). To determine the fitness of a task-prompt we evaluate its performance on a random batch of training data. Over multiple generations, Promptbreeder subsequently mutates task-prompts as well as mutation-prompts using five different classes of mutation operators. The former leads to increasingly domain-adaptive task-prompts whereas the latter evolves increasingly useful mutation-prompts in a self-referential way. # 2 RELATED WORK Prompting an LLM in the right way is essential to its downstream performance (Moradi & Samwald, 2021; Madaan & Yazdanbakhsh, 2022; Zhou et al., 2023). Indeed, even the order in which prompts are presented can heavily influence LLM performance (Lu et al., 2022). A number of recent works have focused on devising better prompt strategies, or even automating such prompt engineering. Prompting: Chain-of-Thought Prompting (CoT, Wei et al., 2022) is a popular prompt strategy which provides intermediate reasoning steps as few-shot prompts to an LLM, thereby significantly improving its arithmetic, commonsense, and symbolic reasoning abilities. Notably, the gains of CoT are more pronounced for stronger LLMs. This is intriguing, as it points to the possibility of increasingly capable (and potentially open-ended) self-improving mechanisms on top of adept LLMs, a hypothesis that Promptbreeder directly builds upon. Instead of few-shot CoT prompting, Kojima et al. (2022) demonstrate that LLMs can also be prompted zero-shot (e.g. "Let's think step by step") to produce their own chains of thoughts (Zero-shot CoT) that improve reasoning abilities. Self-Consistency (CoT-SC, Wang et al., 2022) extends CoT by sampling a diverse set of workings out and selecting the most consistent answer. Tree of Thoughts (ToT, Yao et al., 2023) generalizes CoT to multiple workings out that can be expanded or backtracked from.
2309.16797#5
2309.16797#7
2309.16797
[ "2305.03495" ]
2309.16797#7
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Graph of Thoughts (GoT, Besta et al., 2023) is a further generalization to arbitrary graph structures. Plan-and-Solve Prompting (PS, Wang et al., 2023b) encourages an LLM to first devise a plan to solve a problem before attempting to solve it. Similarly, Least-to-Most Prompting (Zhou et al., 2022) encourages an LLM to decompose a problem into subparts, and then to solve each part individually before synthesizing an answer. Self-Refine (Madaan et al., 2023) prompts an LLM to generate a response, to provide feedback on the response, and to finally refine the solution. In contrast to the gradient-free approaches above, Soft Prompting approaches (e.g., Liu et al., 2021; Qin & Eisner, 2021; Lester et al., 2021) directly fine-tune continuous prompt representations. Huang et al. (2022) use CoT and CoT-SC on an unlabelled dataset of questions, and subsequently fine-tune an LLM based on generated solutions. Similarly, Zelikman et al. (2022) use CoT to generate rationales, and fine-tune the LLM based on those examples and rationales that yielded the correct answer. However, as argued by Zhou et al. (2023), any approach that updates all or a portion of LLM parameters will not scale as models get bigger and, moreover, will not work with the increasing number of LLMs hidden behind an API. All of the prompt engineering approaches above are domain-agnostic but hand-designed. Central to our work is the hypothesis that we could do better by employing an automated self-improvement process that can adapt prompts to a domain at hand. Auto-CoT (Zhang et al., 2023b) and Automatic-CoT (Shum et al., 2023) automatically find reasoning chains for Few-Shot CoT. Automatic Prompt Engineer (APE, Zhou et al., 2023) uses one generator-prompt to generate prompt candidates, and another mutation-prompt to mutate them. In contrast to APE, our work performs compositional task-specific initialization of mutation-prompts, subsequent online mutation of mutation-prompts, uses special mutation operators that take into account the whole population and elite history, and uses diversity-maintenance methods,
2309.16797#6
2309.16797#8
2309.16797
[ "2305.03495" ]
2309.16797#8
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
all of which help avoid the problem of diminishing returns and diversity loss suffered by APE. Concurrently to our work, Yang et al. (2023a) developed Optimization by PROmpting (OPRO), a prompt optimization method that varies prompts using a single complex mutation prompt and evaluates newly generated prompts on a small fixed training set of problems. In contrast, Promptbreeder autonomously evolves multiple LLM-generated mutation-prompts as well as task-prompts, and evaluates fitness on random subsets from the whole training set during evolution. At the time of its release, OPRO achieved a score of 80.2% via the optimized zero-shot prompt "Take a deep breath and work on this problem step-by-step" on GSM8K. Promptbreeder surpasses this with 83.9% in the zero-shot setting with the unintuitively simple prompt "SOLUTION", further evidence for the sensitivity of LLMs to prompts and the importance of finding effective prompts automatically. Also concurrently to our work, Guo et al. (2023) developed EvoPrompt, which uses a fixed mutation (and crossover) prompt, as well as a prompt that asks for a mutant of the difference between two parent prompts, to produce offspring prompts. EvoPrompt is initialized with a whole population of hand-designed, task-tailored prompts rather than a single problem description as we do. In contrast to the two approaches above, Promptbreeder uses LLMs to self-referentially improve mutation-prompts, and it is able to evolve contexts as well. Self-Referential Self-Improvement: Developing an open-ended system that can improve itself, as well as improve the way it is improving itself (Schmidhuber, 1993; 2003), is a long-standing open problem in AI research. Schmidhuber (1993) introduced an "introspective" neural network with a self-referential weight matrix that can modify its own weights and, thus, also modify those weights that govern how its own weights are modified. Recently, Irie et al. (2022) proposed a more scalable self-referential weight matrix, taking inspiration from fast weight programmers (Schmidhuber, 1992).
2309.16797#7
2309.16797#9
2309.16797
[ "2305.03495" ]
2309.16797#9
Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
Kirsch & Schmidhuber (2022) propose a self-referential meta-learning approach, combining self-referential weight matrices with ideas from Gödel Machines (Schmidhuber, 2003), i.e., allocating more computational resources to better-performing solutions. However, since these approaches directly modify the parameters of a model, it is unclear how to scale them to the increasing number of parameters in modern LLMs. In contrast, for Promptbreeder the substrate of self-referential self-improvement is natural language, avoiding costly parameter updates altogether. Open-Endedness and LLMs: Promptbreeder makes use of the observation by Lehman et al. (2022), Meyerson et al. (2023) and Chen et al. (2023) that LLMs are effective at generating mutations from examples. In addition, LLMs encode human notions of interestingness and can be used to automatically quantify novelty (Zhang et al., 2023a). Promptbreeder is related to Picbreeder (Secretan et al., 2008), an open-ended human-in-the-loop system that evolves increasingly interesting images. While Picbreeder explores the space of images, Promptbreeder explores the space of prompts, and does so without humans in the loop. As Promptbreeder is proposing mutated prompts to itself, it is an example of a system transitioning from "learning from data" to "learning what data to learn from" (Jiang et al., 2022). # 3 PROMPTBREEDER We introduce Promptbreeder, a prompt evolution system that can automatically explore prompts for a given domain and that is able to find task-prompts that improve an LLM's ability to derive answers to questions in that domain. Promptbreeder is general-purpose in that the same system is able to adapt to many different domains. Promptbreeder makes use of the observation that LLMs can be used to generate variations of input text (Lehman et al., 2022; Meyerson et al., 2023; Chen et al., 2023). Figure 1 gives an overview of our method. We are interested in evolving task-prompts.
A task-prompt P is a string used to condition the context of an LLM in advance of some further input Q, intended to ensure a better response than if Q had been presented in the absence of P. To evaluate the fitness of each evolved task-prompt, we sample a batch of 100 Q&A pairs from the entire training set of the domain at hand.2

Promptbreeder generates task-prompts according to an evolutionary algorithm. The mutation operator for this algorithm is itself an LLM, conditioned on a mutation-prompt M. That is, a mutated task-prompt P′ is defined by P′ = LLM(M + P), where "+" corresponds to string concatenation. A variety of such mutation-prompts are described in Section 3.2. Promptbreeder's main self-referential mechanism stems from applying the evolutionary algorithm not just to task-prompts but also to mutation-prompts. The mutation operator for this meta-level algorithm is again an LLM, now conditioned on a hyper-mutation prompt H. That is, we obtain a mutated mutation-prompt M′ via M′ = LLM(H + M).

Given a set of "thinking styles" T and a set of initial mutation-prompts M, as well as a domain-specific problem description D, Promptbreeder initializes a population of mutated task-prompts (see Section 3.1). To clarify, a unit of evolution consists of a set of task-prompts, a mutation-prompt and, in the few-shot case, a set of correct workings out (i.e. step-by-step or "chains-of-thought" reasoning steps that led to the correct answer). This means task-prompts and mutation-prompts are in 1:1 correspondence. To evolve this population, we employ a binary tournament genetic algorithm framework (Harvey, 2011): we sample two individuals from the population, take the individual with the higher fitness, mutate it (see next section) and overwrite the loser with the mutated copy of the winner.

3.1 PROMPTBREEDER INITIALIZATION

To give a concrete example, consider the initialization steps used to produce the task-prompts and mutation-prompts for GSM8K (a "
grade school maths" word problem dataset). The problem description is "Solve the math word problem, giving your answer as an arabic numeral". Because Plan-and-Solve (Wang et al., 2023b) uses two task-prompts, we also evolve two task-prompts (plus a mutation-prompt) per unit of evolution. In order to promote diversity in the initial prompts, we generate the initial task-prompts by concatenating (for each task-prompt) a randomly drawn "mutation-prompt" (e.g. "Make a variant of the prompt.") and a randomly drawn "thinking-style" (e.g. "Let's think step by step") to the problem description, and provide that to the LLM to produce a continuation, resulting in an initial task-prompt. We do this twice to produce the two initial task-prompts per unit. Both the mutation-prompt and the thinking-style are randomly sampled from an initial set of mutation-prompts and a set of thinking-styles (see Appendices C, D and G for the full sets). The mutation-prompt is added to the unit of evolution and so is associated with its specific task-prompt throughout the evolutionary run. For the example above, the complete input string to the LLM to make an initial task-prompt could be "Make a variant of the prompt. INSTRUCTION: Solve the math word problem, giving your answer as an arabic numeral. INSTRUCTION MUTANT:". Note how the control strings "INSTRUCTION" and "INSTRUCTION MUTANT" are added to encourage an appropriate continuation. Table 4 in Appendix E shows examples of the initial prompts generated in this way.

2 Our prompt strategy sequentially applies two task-prompts. The first task-prompt + question produces a continuation. The continuation + second task-prompt produces the final answer.

3.2 MUTATION OPERATORS

As shown in Figure 1, there are nine operators falling into five broad classes which drive the exploration of prompt strategies. For each replication event only one of nine mutation operators is applied (we sample with uniform probability over the nine operators to decide which mutation operator to apply). The rationale for using this diverse set of operators is to enable the LLM to explore a large space of cognitive methods of linguistic self-questioning, by repeatedly changing the framing of the problem as well as retrieving mental models expressed in natural language that can help tackle a given reasoning challenge.
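To make the initialization and the overall tournament loop concrete, the following is a minimal Python sketch, not the paper's implementation. The `llm` completion function and the `fitness` scorer are hypothetical stand-ins (fitness would be the proportion of a random batch of training Q&A pairs answered correctly by the unit's two sequentially applied task-prompts), the seed lists are heavily truncated, and `mutate_unit` stands in for the nine operators described next.

```python
import random

def llm(prompt: str) -> str:
    """Hypothetical text-completion call; plug in an actual model here."""
    raise NotImplementedError

def fitness(task_prompts: list[str]) -> float:
    """Hypothetical scorer: accuracy of the two sequentially applied task-prompts
    on a random batch of training Q&A pairs."""
    raise NotImplementedError

PROBLEM_DESCRIPTION = "Solve the math word problem, giving your answer as an arabic numeral"
THINKING_STYLES = ["Let's think step by step.", "Let's first understand the problem."]
MUTATION_PROMPTS = ["Make a variant of the prompt.", "Improve this instruction."]

def initial_task_prompt(mutation_prompt: str, thinking_style: str) -> str:
    # e.g. "Make a variant of the prompt. Let's think step by step.
    #       INSTRUCTION: <problem description> INSTRUCTION MUTANT:"
    seed = f"{mutation_prompt} {thinking_style} INSTRUCTION: {PROBLEM_DESCRIPTION} INSTRUCTION MUTANT:"
    return llm(seed).strip()

def make_unit() -> dict:
    mutation_prompt = random.choice(MUTATION_PROMPTS)
    return {
        "mutation_prompt": mutation_prompt,
        # Two task-prompts per unit, mirroring Plan-and-Solve.
        "task_prompts": [
            initial_task_prompt(mutation_prompt, random.choice(THINKING_STYLES))
            for _ in range(2)
        ],
    }

def mutate_unit(unit: dict) -> dict:
    """Placeholder for sampling and applying one of the nine mutation operators of Section 3.2."""
    raise NotImplementedError

def evolve(population_size: int = 50, generations: int = 20) -> list[dict]:
    population = [make_unit() for _ in range(population_size)]
    for _ in range(generations):
        # Binary tournament: form random disjoint pairs of all individuals,
        # then overwrite each loser with a mutated copy of its winner.
        shuffled = iter(random.sample(range(population_size), population_size))
        for a, b in zip(shuffled, shuffled):
            fa, fb = fitness(population[a]["task_prompts"]), fitness(population[b]["task_prompts"])
            winner, loser = (a, b) if fa >= fb else (b, a)
            population[loser] = mutate_unit(population[winner])
    return population
```

The `mutate_unit` placeholder is where the mutation operators introduced in the next subsections would be drawn from with uniform probability.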
Investigations from insight learning strongly suggest that diverse representational re-description is key to problem solving (Öllinger & Knoblich, 2009), a principle that we attempt to recreate via self-referential self-improvement with natural language as the substrate. Figure 2 illustrates in what way Promptbreeder is self-referential (see Appendix F for a more detailed explanation).

3.2.1 DIRECT MUTATION

The simplest class of mutation operators directly generates a new task-prompt P′ from either one existing task-prompt P (first-order prompt generation) or from a general prompt that encourages free-form generation of new task-prompts, i.e. not using an existing parent, thus zero-order prompt generation.

Zero-order Prompt Generation: We generate a new task-prompt by concatenating the problem description D (e.g. "Solve the math word problem, giving your answer as an arabic numeral") with the prompt "A list of 100 hints:", which invites the LLM to come up with a new hint that could help solve a problem in the given problem domain. We extract the first generated hint as the new task-prompt. Crucially, this new task-prompt does not depend on any previously found task-prompt. Instead, it is re-generated from the problem description each time. Our rationale for including this zero-order operator is that where prompt evolution diverges, this operator allows us to generate new task-prompts closely related to the original problem description, similar to uniform re-sampling in automated curriculum learning approaches (Jiang et al., 2021b;a; Park et al., 2023; Parker-Holder et al., 2022).

First-order Prompt Generation: We concatenate the mutation-prompt (red) to the parent task-prompt (blue), and pass it to the LLM to produce the mutated task-prompt. For example "Say that instruction again in another way. DON'T use any of the words in the original instruction there's a good chap. INSTRUCTION: Solve the math word problem, giving your answer as an arabic numeral. INSTRUCTION MUTANT: ".
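A minimal sketch of these two direct operators, assuming the same hypothetical `llm` completion function as in the earlier sketch; the control strings mirror the examples quoted above.

```python
def zero_order_prompt_generation(llm, problem_description: str) -> str:
    # New task-prompt regenerated from the problem description alone.
    continuation = llm(f"{problem_description} A list of 100 hints:")
    # Keep only the first generated hint as the new task-prompt.
    first_line = next((line for line in continuation.splitlines() if line.strip()), "")
    return first_line.strip()

def first_order_prompt_generation(llm, mutation_prompt: str, task_prompt: str) -> str:
    # Mutated task-prompt = LLM(mutation-prompt + task-prompt), with control
    # strings encouraging an appropriate continuation.
    seed = f"{mutation_prompt} INSTRUCTION: {task_prompt} INSTRUCTION MUTANT:"
    return llm(seed).strip()
```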
This procedure is identical to the initialization method, except that a randomly sampled thinking-style string is not used. First-order prompt generation is Promptbreeder's standard asexual mutation operator, and it is the core of every genetic algorithm: taking one parental genotype (task-prompt) and applying the mutation to it (in this case influenced by the mutation-prompt).

3.2.2 ESTIMATION OF DISTRIBUTION MUTATION

The next class of mutation operators condition not just on zero or one parent, but instead on a set of parents. As such, they may be more expressive by considering patterns in the population.

Estimation of Distribution (EDA) Mutation: Inspired by Hauschild & Pelikan (2011), we provide a filtered and numbered list of the current population of task-prompts to the LLM and ask it to continue this list with new task-prompts. We filter the population of prompts on the basis of BERT (Devlin et al., 2019) embedding cosine similarities between each other: an individual is not included in the list if it is more than 0.95 similar to any other entry in the list, thus encouraging diversity (cf. quality-diversity methods (Lehman & Stanley, 2011b;a; Mouret & Clune, 2015)). The prompts are listed in random order and we do not give the LLM access to the fitness values of individuals in the population; we found in preliminary experiments that the LLM did not understand these fitness values3 and resorted to generating copies of entries in the list.

3 This is contrary to recent findings by Mirchandani et al. (2023). We leave it for future work to revisit whether LLMs can interpret fitness values for improved prompt evolution.
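The diversity filter at the heart of the EDA operator can be sketched as follows. Here `embed` is a hypothetical sentence-embedding function (the paper uses BERT embeddings), `llm` is the same hypothetical completion function, and the wording used to ask for a list continuation is illustrative; only the 0.95 threshold is taken from the text above.

```python
import math
import random

def cosine_similarity(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def eda_mutation(llm, embed, population_prompts: list[str], threshold: float = 0.95) -> str:
    # Drop a prompt if it is more than `threshold` cosine-similar to one already kept.
    kept: list[str] = []
    for prompt in population_prompts:
        if all(cosine_similarity(embed(prompt), embed(other)) <= threshold for other in kept):
            kept.append(prompt)
    random.shuffle(kept)  # random order; fitness values are deliberately not shown
    numbered = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(kept))
    # Ask the LLM to continue the numbered list with a fresh task-prompt.
    continuation = llm(f"A list of task-prompts:\n{numbered}\n{len(kept) + 1}.")
    first_line = next((line for line in continuation.splitlines() if line.strip()), "")
    return first_line.strip()
```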
[Figure 2 shows four panels: (a) Direct, (b) Mutation-Prompt Guided, (c) Hyper Mutation, and (d) Promptbreeder.] Figure 2:
Overview of multiple variants of self-referential prompt evolution. In (a), the LLM is directly used to generate variations P′ of a prompt strategy P (cf. Meyerson et al., 2023). Using a mutation prompt M, we can explicitly prompt an LLM to produce variations (b). By using a hyper mutation prompt H, we can also evolve the mutation prompt itself, turning the system into a self-referential one (c). Promptbreeder (d) improves the diversity of evolved prompts and mutation prompts by generating an initial population of prompt strategies from a set of seed thinking-styles T, mutation-prompts M, as well as a high level description D of the problem domain.

EDA Rank and Index Mutation: This is a variant of the above in which task-prompts are listed in fitness order. Preliminary experiments showed that the LLM is more likely to generate entries that are similar to the elements appearing later in the list. This is in line with similar findings of recency effects in LLMs (Liu et al., 2023). Therefore, after filtering in the same way as before, we ordered the task-prompts in the population by ascending order of fitness. The top of the list is prefixed by the following prompt: "INSTRUCTION: " + <<mutation-prompt>> + " A List of Responses in descending order of score." + <<last index + 1>> + "is the best response. It resembles" + <<last index>> + "more than it does (1)". Note that we have "lied" to the LLM by telling it that the order is descending.
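As a concrete illustration, the ranked-list prompt for this operator could be assembled roughly as follows; the header follows the quoted template, while the entry formatting and the `scored_prompts` input are illustrative assumptions rather than the paper's exact plumbing.

```python
def eda_rank_and_index_prompt(mutation_prompt: str, scored_prompts: list[tuple[str, float]]) -> str:
    # Sort by ascending fitness so the best prompts appear last (where the LLM's
    # recency bias points), even though the header claims a descending order.
    ordered = [prompt for prompt, _ in sorted(scored_prompts, key=lambda item: item[1])]
    last_index = len(ordered)
    header = (
        f"INSTRUCTION: {mutation_prompt} "
        f"A List of Responses in descending order of score. "
        f"{last_index + 1} is the best response. "
        f"It resembles {last_index} more than it does (1)"
    )
    body = "\n".join(f"({i + 1}) {prompt}" for i, prompt in enumerate(ordered))
    # The LLM is then asked to continue the list with entry number last_index + 1.
    return f"{header}\n{body}\n({last_index + 1})"
```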
This is because otherwise it is too biased towards producing a new entry that is too similar to the final entry. The contradiction between the ascending ordering and the statement that it is a descending ordering appears to improve the diversity of sampling. The rationale for this operator is again to represent the current distribution in such a way that high fitness and yet diverse extrapolations are suggested by the LLM.

Lineage Based Mutation: For each unit of evolution, we store a history of the individuals in its lineage that were the best in the population, i.e., a historical chronological list of elites. This list is provided to the LLM in chronological order (not filtered by diversity), with the heading "GENOTYPES FOUND IN ASCENDING ORDER OF QUALITY" to produce a novel prompt as continuation. The rationale for this operator is that we expect the signal of improving genotype prompts may be stronger than the signal from prompts in the current population, since they provide a gradient of bad to good prompts that could be followed (assuming this signal can be used by the LLM).

3.2.3 HYPERMUTATION: MUTATION OF MUTATION-PROMPTS

While the mutation operators above might already explore diverse task-prompts, a self-improving system should ideally also improve the way it is improving itself in a self-referential way. Our third class of mutation operators includes hyper-mutation operators concerned with the evolution of evolvability (Dawkins, 2003; Pigliucci, 2008; Payne & Wagner, 2019; Gajewski et al., 2019), those which modify the search/exploration process rather than the task-reward-obtaining process directly.4

Zero-order Hyper-Mutation: We concatenate the original problem description to a randomly sampled thinking-style, and feed it to the LLM to generate a new mutation-prompt. The resulting mutation-prompt is applied to a task-prompt to make a variant of the task-prompt as in First-order Prompt Generation (see Section 3.2.1). Note that this zero-order meta-mutation operator is identical to that used during initialization. The rationale for this operator is to generate mutation operators in a way similar to initialization, while also bringing in knowledge from the set of thinking styles.

4 This is similar to population based training (Jaderberg et al., 2017a);
instead of applying it to hyperparameters such as learning rates, it applies to the mutation-prompts of Promptbreeder.

First-order Hyper-Mutation: We concatenate the hyper-mutation-prompt "Please summarize and improve the following instruction:" to a mutation-prompt so that the LLM generates a new mutation-prompt. This newly generated mutation-prompt is then applied to the task-prompt of that unit (see First-Order Prompt Generation in Section 3.2.1). In this way, we can evaluate the influence of the hyper-mutation via its newly generated mutation-prompt on the quality of the evolved downstream task-prompt at once.

3.2.4 LAMARCKIAN MUTATION

For this class of mutation operators we mimic a Lamarckian process. We want to use a successful phenotype (i.e. the concrete working out used to produce correct answers induced by an evolved task-prompt) to generate a new genotype (i.e. a mutant task-prompt). Several processes of this form have appeared in the LLM literature, e.g. STaR (Zelikman et al., 2022), APO (Pryzant et al., 2023), and APE (Zhou et al., 2023).

Working Out to Task-Prompt: This is a "Lamarckian" mutation operator similar to instruction induction in APE. We give an LLM a previously generated working out that led to a correct answer via the following prompt: "I gave a friend an instruction and some advice. Here are the correct examples of his workings out + <<correct working out>> + The instruction was:". This is effectively reverse-engineering the task-prompt from a given working out. An effective example of this is shown in Appendix H. This kind of operator is critical when the problem description is absent, insufficient, or misleading.

3.2.5 PROMPT CROSSOVER AND CONTEXT SHUFFLING

Our last class of mutation operators are crossover operators and operators for shuffling the few-shot context examples present in the units of evolution.

Prompt Crossover: After a mutation operator is applied, with a 10% chance a task-prompt is replaced with a randomly chosen task-prompt from another member of the population. This member is chosen according to fitness proportionate selection. Crossover is not applied to mutation-prompts, only to the task-prompts.

Context Shuffling:
Promptbreeder can simultaneously evolve the task-prompts, mutation-prompts and the set of correct workings out known as the few-shot context. To achieve the later, we fill up a few-shot context with only workings out that led to correct answers. During evaluation we provide this few shot-context before the task-prompt, providing guidance as to the form of the working out that is desired. If the few-shot context list is full, a single randomly sampled new correct working out replaces an existing working out from the list after fitness evaluation of a unit on a new set of questions. In addition, with a 10% chance we resample the whole context list with probability inverse to the maximum context list length. # 4 EXPERIMENTS We used a population size of 50 units, evolved for typically 20-30 generations, where a generation involves forming random pairs of all individuals in the population and competing them against each other. To evaluate Promptbreeder, we use the datasets from state-of-the-art prompt strategies such as Plan-and-Solve, spanning arithmetic reasoning with GSM8K (Cobbe et al., 2021), SVAMP (Pa- tel et al., 2021), MultiArith (Roy & Roth, 2016), AddSub (Hosseini et al., 2014), AQuA-RAT (Ling et al., 2017), and SingleEq (Koncel-Kedziorski et al., 2015), commonsense reasoning with Common- senseQA (CSQA, Talmor et al., 2019) and StrategyQA (SQA, Geva et al., 2021), instruction induc- tion tasks from (Honovich et al., 2023), and hate speech classification on the ETHOS dataset (Mollas et al., 2022).
See Appendix I for details. # 5 RESULTS AND DISCUSSION We present results of Promptbreeder (PB) in comparison to state-of-the-art prompt strategies on a range of commonly used reasoning benchmarks in Table 1. PB outperforms PS+, the best Plan-and- Solve (Wang et al., 2023b) prompting technique. Note that the performance of PS+ is improved 8 by using PaLM 2-L (Anil et al., 2023) as the underlying LLM (PS+PaLM 2-L) on all datasets ex- cept ADDSUB compared to text-davinci-003 results in the original paper. On all other datasets, zero-shot PB accuracy is higher than PS+, with further improvement in the few-shot case when ex- amples of discovered solutions are included with the prompts. In Table 6 in Appendix J, we show the best evolved zero-shot prompts. The best few-shot candidates are shown in Appendix J.5 on- wards. Appendix K shows few-shot results and their controls on the Instruction Induction tasks from the APE paper. To investigate the ability of Promptbreeder to evolve complex domain-specific prompts for a downstream task, we applied it to the ETHOS Hate Speech Classification prob- lem (Mollas et al., 2022). Promptbreeder was able to evolve a prompt strategy consisting of two sequentially applied relatively long prompts (see Appendix J.1) that scored 89% on ETHOSâ an improvement over the hand-designed prompt "Determine whether a text contains hate speech" which scores only 80%. This demonstrates that Promptbreeder is capable of intricate domain-adaptation to a task at hand. Appendix B shows a typical evolutionary run and the prompts evolved, showing that unlike iterative APE, fitness continues to increase throughout the run. We analysed the best mutation-prompts used during a run for GSM8K. Table 7 in Appendix J.3 shows the best evolved mutation prompts according to their scores (the proportion of times that when the mutation-prompt was applied to a task-prompt in an unit, a better task-prompt was produced). Table 8 in Appendix J.4 shows in descending order, the percentage of times that the different kinds of mutation operators resulted in an improvement when applied to a task-prompt in the population.
It demonstrates that all mutation operators are important for Promptbreeder to work, including hyper-mutation operators which lead to self-referential self-improvement. We measured the impact of self-referential operators on all the maths datasets and the ETHOS dataset. Details of the ablation process and its results can be found in Appendix L. Removing any self-referential operator is harmful under nearly all circumstances, the greatest benefit being the initial re-description of task-prompts upon initialization. We only found one mutation operator to be harmful for one specific task: drawing randomly from the set of mutation-prompts upon initialization hurts performance on GSM8K.

# 6 CONCLUSION AND FUTURE WORK

We introduced PROMPTBREEDER (PB), a self-referential self-improving system that can automatically evolve effective domain-specific prompts for a domain at hand. PB is self-referential in that it not only evolves task-prompts, but it also evolves mutation-prompts that govern the way PB modifies task-prompts. Thus, it is not only improving prompts but it also improves the way it is improving prompts. Going forward, it could be interesting to use the LLM itself to assess and promote the diversity of generated prompts (see Zhang et al., 2023a), or to use it to determine the fitness of a whole "thought process", e.g. an N-prompt strategy where prompts are conditionally applied rather than unconditionally applied as in Promptbreeder. For example, a more complex "thought process" is to use PB in self-play mode to evolve pre-prompts for LLM-based policies that compete with each other, i.e., in a competitive Socratic5 dialog. PB remains limited compared to the open-endedness of human thought processes. First, the topology of prompting remains fixed (see Figure 2): we only adapt the prompt content, not the prompting algorithm itself. One interpretation of thought is that it is a reconfigurable open-ended self-prompting process. If so, how does one develop complex thought strategies? Clearly it is necessary to generate and evaluate them, and whilst a simple evolutionary process provides one framework in which a thought strategy could be evolved, our actual human experience suggests multiple overlapping hierarchical selective processes at play. Moreover, in addition to language, human thought involves intonation, imagery, etc., in a multimodal system.
We believe PB points to an exciting future where increasingly open-ended self-referential self-improvement systems can directly use language as the substrate for improvement instead of relying on any parameter updates. This is intriguing, as this approach will likely continue to scale with ever larger and more capable LLMs in the future.

5 https://princeton-nlp.github.io/SocraticAI/

# ACKNOWLEDGMENTS

We thank Edward Hughes and Tom Schaul for feedback on an early draft of the paper. We also thank Tom Schaul, Chengrun Yang, and Denny Zhou for fruitful discussions, as well as Gavin Buttimore, Simon Green, Keith Anderson, Joss Moore, Ollie Purkiss, John Quan, and Francesco Visin for their support in running some of the experiments.

# REFERENCES

Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A.
Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. PaLM 2 Technical Report, September 2023.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. Graph of thoughts: Solving elaborate problems with large language models. CoRR, abs/2308.09687, 2023. doi: 10.48550/arXiv.2308.09687. URL https://doi.org/10.48550/arXiv.2308.09687.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
Language models are few-shot learners. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.

Angelica Chen, David M. Dohan, and David R. So. Evoprompting:
Language models for code-level neural architecture search. CoRR, abs/2302.14838, 2023. doi: 10.48550/arXiv.2302.14838. URL https://doi.org/10.48550/arXiv.2302.14838.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks, November 2022.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.

Richard Dawkins. 13 - The evolution of evolvability. In Sanjeev Kumar and Peter J. Bentley (eds.), On Growth, Form and Computers, pp. 239-255. Academic Press, London, January 2003. ISBN 978-0-12-428765-5. doi: 10.1016/B978-012428765-5/50046-3.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 4171-4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423.

Alexander Gajewski, Jeff Clune, Kenneth O. Stanley, and Joel Lehman.
Evolvability ES: scalable and direct optimization of evolvability. In Anne Auger and Thomas Stützle (eds.), Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2019, Prague, Czech Republic, July 13-17, 2019, pp. 107-115. ACM, 2019. doi: 10.1145/3321707.3321876. URL https://doi.org/10.1145/3321707.3321876.

Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant.