# Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering (arXiv:2307.16877)

Table 8: Results on 100 samples per dataset that were manually annotated for correctness w.r.t. information need. The best score is highlighted in bold and the second best is specified with underline. Traditional evaluation metrics like EM and F1 vary significantly from human evaluation in their judgments.

| Dataset | Model | K-F1 ↑ | K-Precision ↑ | K-BertS (F1) ↑ | FaithCritic ↑ | Q2 (F1) ↑ | Q2 (NLI) ↑ | LLMCritic ↑ | Human Eval ↑ |
|---|---|---|---|---|---|---|---|---|---|
| NQ | GPT-3.5 | 24.57 | 76.53 | 86.70 | 75.85 | 54.44 | 60.77 | 94.87 | 88.46 |
| NQ | Flan-T5 | 4.81 | 96.15 | 81.01 | 19.28 | 39.06 | 40.83 | 89.74 | 92.31 |
| NQ | Alpaca-7B | 9.89 | 80.64 | 83.01 | 33.37 | 35.54 | 37.23 | 82.05 | 83.33 |
| HotpotQA | GPT-3.5 | 15.91 | 84.7 | 84.05 | 54.23 | 54.93 | 63.21 | 100.00 | 96.81 |
| HotpotQA | Flan-T5 | 3.03 | 95.74 | 78.61 | 29.81 | 33.84 | 36.88 | 100.00 | 100.00 |
| HotpotQA | Alpaca-7B | 11.50 | 87.55 | 83.12 | 49.09 | 45.63 | 51.95 | 87.23 | 93.62 |
| TopiOCQA | GPT-3.5 | 29.52 | 80.80 | 87.69 | 74.64 | 65.25 | 70.92 | 97.96 | 88.78 |
| TopiOCQA | Flan-T5 | 24.09 | 91.94 | 86.70 | 59.61 | 69.16 | 73.97 | 93.88 | 93.88 |
| TopiOCQA | Alpaca-7B | 24.58 | 77.76 | 86.31 | 57.56 | 50.89 | 56.62 | 89.8 | 79.59 |

Table 9: Faithfulness results on 50 manually annotated samples. The best score is highlighted in bold and the second best is specified with underline.
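The K-F1 and K-Precision columns above are knowledge-grounding scores, i.e. token-overlap measures between a model response and the reference knowledge passage. As a rough illustration only, the sketch below shows a generic token-overlap precision/F1 computation; the tokenization and normalization choices here are assumptions, not the paper's exact implementation.

```python
from collections import Counter


def _tokenize(text: str) -> list[str]:
    # Simplistic lowercase whitespace tokenization; the paper's exact
    # preprocessing (normalization, stopword handling) is not specified here.
    return text.lower().split()


def knowledge_precision_f1(response: str, knowledge: str) -> tuple[float, float]:
    """Return (precision, F1) of token overlap between a response and a knowledge passage."""
    resp_tokens = Counter(_tokenize(response))
    know_tokens = Counter(_tokenize(knowledge))
    overlap = sum((resp_tokens & know_tokens).values())
    if overlap == 0:
        return 0.0, 0.0
    precision = overlap / sum(resp_tokens.values())
    recall = overlap / sum(know_tokens.values())
    f1 = 2 * precision * recall / (precision + recall)
    return precision, f1


# Example usage with made-up strings:
p, f1 = knowledge_precision_f1(
    "Mars is also known as the Red Planet.",
    "Mars is the fourth planet from the Sun and is often called the Red Planet.",
)
print(round(p, 2), round(f1, 2))
```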
Error Type: Incomplete Reference Answers - Open-ended Questions
Dataset: TopiOCQA
Question: does she (J.K. Rowling) donate to any charity?
Reference answer: Yes - In 2002, Rowling contributed a foreword to "Magic", an anthology of fiction published by Bloomsbury Publishing, helping to raise money for the National Council for One Parent Families.
Alpaca's response: Yes, she does. She is a member of both English PEN and Scottish PEN, and has donated the proceeds from several of her books to various charities.

Error Type: Incomplete Reference Answers - List of Named Entities
Dataset: Natural Questions
Question: what kind of animals live in the great victoria desert
Reference answer: malleefowl; water-holding frog; great desert skink; sandhill dunnart; dingo; sand goanna; crest-tailed mulgara; southern marsupial mole; chestnut-breasted whiteface; perentie
Flan-T5's response: lizard.

Error Type: Intrinsic Ambiguity in Questions
Dataset: Natural Questions
Question: who played in the stanley cup finals last year
Reference answers: Nashville Predators, Pittsburgh Penguins
GPT-3.5's response: The Washington Capitals and the Vegas Golden Knights played in the Stanley Cup Finals last year (2018).

Figure 11: Examples where GPT-4Eval predictions differ from human judgement (alongside F1 < 0.3).

Dataset: NQ
Question: points on a sphere or angles in a circle are measured in units called
Reference answer: radians
Responses: Alpaca: radians. Flan-T5: I don't know. GPT-3.5: degrees.
Irrelevant knowledge: Title: Inscribed angle - inscribed angle theorem states that an angle θ inscribed in a circle is half of the central angle 2θ that subtends the same arc on the circle. Therefore, the angle does not change as its vertex is moved to different positions on the circle...

Question: who played the colorado kid in rio bravo
Reference answer: Ricky Nelson
Responses: Alpaca: The role of the Colorado Kid was played by John Dehner in the 1959 film Rio Bravo. Flan-T5: Jim J. Bullock. GPT-3.5: I don't know.
Irrelevant knowledge: Title: Jim J. Bullock - James Jackson Bullock (born February 9, 1955), sometimes credited as Jm J. Bullock, is an American actor and comedian of stage, television and motion pictures... Bullock became a notable entertainment figure in the 1980s.

Dataset: HotpotQA
Question: Ralph Hefferline was a psychology professor at a university that is located in what city?
Reference answer: New York City
Responses: Alpaca: Columbia, Missouri. Flan-T5: Columbia. GPT-3.5: I don't know.
Irrelevant knowledge: Title: University of Missouri - The University of Missouri (also, Mizzou, or MU) is a public land-grant research university located in Columbia, Missouri, U.S. It was founded in 1839...

Dataset: TopiOCQA
Question: User: what is the location of mars in the solar system. Agent: Mars is the fourth planet from the Sun. User: is it known by any other names?
Reference answer: Red Planet
Responses: Alpaca: No, it is known by its official name, Mars. Flan-T5: Mars is also known as the Red Planet. GPT-3.5: I don't know.
Irrelevant knowledge: Title: Haboob - Global dust storms on Mars have been compared to haboobs on Earth.

Figure 12: Failure examples of the models in generating I don't know in the presence of irrelevant knowledge.
Dataset: NQ
Question: points on a sphere or angles in a circle are measured in units called
Reference answer: radians
Responses: Alpaca: 1. radians. Flan-T5: radians. GPT-3.5: I don't know.
Relevant knowledge: Title: Position angle - ... It is calculated in a plane that contains the sphere center and the great circle, where is the angular distance of two points viewed from the center of the sphere, measured in radians...

Question: who had a baby at 100 in the bible
Reference answer: 1. Sarah, 2. Abraham
Responses: Alpaca: 100 year old Sarah had a baby with Abraham. Flan-T5: I don't know. GPT-3.5: Abraham had a baby at 100 in the Bible.
Relevant knowledge: Title: The Bible and humor - ... Macy says that in speaking of humor in the Bible he usually begins with asking how it might feel to personally get the message Abraham and Sarah received, putting them into the absurd situation of a ninety year old having a baby...

Dataset: HotpotQA
Question: Roger O. Egeberg was Assistant Secretary for Health and Scientific Affairs during the administration of a president that served during what years?
Reference answer: 1969 until 1974
Responses: Alpaca: 1969-1974. Flan-T5: 1969 until 1974. GPT-3.5: I don't know.
Relevant knowledge: Title: Roger O. Egeberg - ... His other roles included Assistant Secretary for Health and Scientific Affairs in the Department of Health, Education, and Welfare (now the United States Department of Health and Human Services) during the Nixon administration and Dean of the University of Southern California's medical school. Title: Richard Nixon - Richard Milhous Nixon (January 9, 1913 – April 22, 1994) was the 37th President of the United States from 1969 until 1974...

Dataset: TopiOCQA
Question: User: how much did the blues brothers movie cost to make
Reference answer: $17.5 million
Responses: Alpaca: 17.5 million. Flan-T5: $17.5 million. GPT-3.5: I don't know.
Relevant knowledge: Title: The Blues Brothers (film) - Principal photography began in July 1979, with the film's budget still not settled. For the first month, things ran smoothly on and off the set. When Weiss saw the supposedly final $17.5 million budget, he reportedly joked, "I think we've spent that much already." ...

Figure 13: Failure examples of the models in generating informative responses in the presence of relevant knowledge.
# Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators

(arXiv:2307.16364)

Paul Denny, The University of Auckland, Auckland, New Zealand ([email protected])
Juho Leinonen, The University of Auckland, Auckland, New Zealand ([email protected])
James Prather, Abilene Christian University, Abilene, TX, USA ([email protected])
Thezyrie Amarouche, University of Toronto Scarborough, Toronto, ON, Canada ([email protected])
Andrew Luxton-Reilly, The University of Auckland, Auckland, New Zealand ([email protected])
Brett A. Becker, University College Dublin, Dublin, Ireland ([email protected])
Brent N. Reeves, Abilene Christian University, Abilene, TX, USA ([email protected])
# ABSTRACT

With their remarkable ability to generate code, large language models (LLMs) are a transformative technology for computing education practice. They have created an urgent need for educators to rethink pedagogical approaches and teaching strategies for newly emerging skill sets. Traditional approaches to learning programming have focused on frequent and repeated practice at writing code. The ease with which code can now be generated has resulted in a shift in focus towards reading, understanding and evaluating LLM-generated code. In parallel with this shift, a new essential skill is emerging – the ability to construct good prompts for code-generating models. This paper introduces a novel pedagogical concept known as a "Prompt Problem", designed to help students learn how to craft effective prompts for LLMs. A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem. To support the delivery of Prompt Problems at scale, in this paper we also present a novel tool called Promptly which hosts a repository of Prompt Problems and automates the evaluation of prompt-generated code. We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course (n = 54). We explore student interactions with the tool and their perceptions of the Prompt Problem concept. We found that Promptly was largely well-received by students for its ability to engage their computational thinking skills and expose them to new programming constructs. We also discuss avenues for future work, including variations on the design of Prompt Problems and the need to study their integration into the curriculum and teaching practice.

# 1 INTRODUCTION

The advent of large language models (LLMs) that can generate code is having a rapid and significant impact on computing education practice, particularly at the introductory level. Traditional pedagogical approaches have focused on helping students learn how to write code. This is typically achieved through frequent practice involving many small problems [1, 7] or through scaffolding via activities such as Parsons problems [10, 11]. However, LLMs are now capable of producing code automatically and have demonstrated impressive performance on problems that are typical in introductory programming courses [12, 13, 28]. In addition to the opportunities they present, educators have voiced concerns around the potential misuse of these models for plagiarism, and over-reliance on AI-generated code by beginners [3], leading to a possible erosion of traditional coding skills [9]. New pedagogical approaches are needed to develop the changing skillsets that students require in the era of generative AI [5].

Teaching students to read and understand code are longstanding goals of introductory courses, and they are becoming increasingly important skills given the ease with which code can be generated by LLM-based tools. An equally important emerging skill is the ability to formulate effective prompts for LLMs to generate code. Recent work has shown that although many typical introductory problems can be solved by LLMs using verbatim textbook or exam problem statements [12, 13], this approach is not always sufficient. For example, manual modification of the prompts to include explicit algorithmic hints greatly improves code-generation performance [30]. In recent work, Denny et al. argue that the ability to engineer effective prompts that generate correct solutions is now an essential skill for students, yet they do not propose concrete approaches for how this skill can be taught [6].

To address this concern, in the current paper we introduce the concept of a "Prompt Problem" – a new exercise paradigm in which students solve programming exercises by formulating natural language prompts for code-generating LLMs. Students are presented with a visual representation of a problem that illustrates how input values should be transformed to an output. Their task is to devise a prompt that would guide an LLM to generate the code required to solve the problem.

In addition to conceptualizing the problem type, we make two other contributions: we introduce a tool for delivering Prompt Problems and we present empirical findings from the use of this tool by introductory programming students. To understand how Prompt Problems work in practice, we have developed a web-based tool called Promptly. This tool displays a problem representation, converts a prompt written by a student to code (via an API call to an LLM), and then executes the code against a suite of test cases. If the code fails to solve the problem, the student must revise and resubmit their prompt. This iterative process continues until the problem is solved. We hypothesize that solving Prompt Problems will help students learn how to craft effective prompts.

We begin by presenting an illustrative example of a Prompt Problem, and we draw from the results of a pilot study to demonstrate the difficulty that students commonly face in formulating effective prompts. We then describe the design of our tool, Promptly, for delivering Prompt Problems at scale and we deploy it in an introductory Python programming course (n = 54). We report the results of our analysis of student interactions with the tool and their perceptions of the activity. We also discuss possible variations of the problem design, and suggest ideas for future work.
# 2 RELATED WORK

It has been less than a year since LLMs began to dominate conversations in the computing education community and a little more than that since the first research papers began to emerge in the computing education literature. Early work centered on the capabilities of these tools, largely driven by concerns that they would lead to a flood of cheating [23] and the effect that would have on student learning. Sometimes, such work involved comparing LLM and student performance, for example in generating explanations of code [17]. Finnie-Ansley et al. demonstrated that Codex (based on GPT-3) ranked in the top quartile of real introductory programming (CS1) students on real exams [12]. A year later Finnie-Ansley et al. extended this work to data structures and algorithms (CS2) exams with very similar results [13]. Other studies on the capabilities of LLMs have revealed impressive proficiency in dealing with object-oriented programming tasks [4], Parsons problems [28], and compiler error messages [18]. Many of these explorations also revealed that LLMs are not infallible and can produce solutions that don't align with best programming practice [4], struggle with longer and higher-level specifications [12], include unnecessary elements [32], and cause students to become confused reading code that they didn't write themselves [14, 27]. Babe et al. showed that LLMs can mislead students, causing them to believe that their own prompts are more (or less) effective than they are in reality [2].

Recently, the focus has started to shift from assessing the capabilities of LLMs to using them in teaching and learning practice [21, 24]. Sarsa et al. showed that LLMs can generate viable programming questions including test cases and explanations [29]. Complementing this reshaping of the practices of teaching and learning, the importance of details such as context [18] and prompting [6] have begun to emerge. For example, White et al. present a prompt structuring framework for constructing prompts so they can be applied across problem domains, a catalog of prompts that have been successfully applied to improve LLM responses, and a demonstration of how prompts can be constructed from patterns and how such patterns can be combined effectively [33]. There is increasing interest in understanding the types of prompts that students construct when communicating with LLMs. Babe et al. developed a benchmark dataset of 1,749 prompts aimed at 48 problems, written by 80 novice Python programming students [2] which can be used by others for LLM benchmarking as well as tool development.

A logical next step towards integrating LLMs into teaching practice is developing tools and resources to aid students in effectively working with LLMs for learning. Lao and Guo interviewed 19 introductory programming instructors from nine countries across six continents and found that some instructors are embracing the idea of integrating AI tools into current courses via mechanisms such as giving personalized help to students and aiding instructors with time-consuming tasks [16]. MacNeil et al. used LLM-generated code explanations successfully in a web software development e-book [22], and Zingaro and Porter are completing a textbook for teaching introductory programming with Copilot and ChatGPT from day one [26]. Integrating LLMs into computer science courses seems inevitable and stands to transform the way the subject is taught at all levels [5, 31]. We believe that Prompt Problems will be one important step along the journey towards integrating the use of LLMs in computer science education.

Figure 1: An example Prompt Problem that displays the data visually so that students cannot simply copy and paste the description into an LLM. The goal is to swap the top-left and bottom-right non-overlapping quadrants of the matrix. The figure shows several example input matrices of increasing size alongside the corresponding output matrices.
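For reference, one correct Python solution to the Figure 1 task could look like the sketch below. It is an illustrative implementation written for this article, not code from the paper or from the Promptly tool; the function name and the handling of odd-sized matrices (the centre row and column are left untouched) are assumptions.

```python
def swap_quadrants(matrix):
    """Swap the top-left and bottom-right non-overlapping quadrants of a square matrix.

    For an n x n matrix, the quadrants are the (n // 2) x (n // 2) blocks in the
    top-left and bottom-right corners; for odd n, the middle row and column are untouched.
    """
    n = len(matrix)
    half = n // 2
    offset = n - half  # starting index of the bottom-right quadrant
    for i in range(half):
        for j in range(half):
            matrix[i][j], matrix[offset + i][offset + j] = (
                matrix[offset + i][offset + j],
                matrix[i][j],
            )
    return matrix


if __name__ == "__main__":
    # 4x4 example: the top-left 2x2 block swaps with the bottom-right 2x2 block.
    example = [
        [1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16],
    ]
    assert swap_quadrants(example) == [
        [11, 12, 3, 4],
        [15, 16, 7, 8],
        [9, 10, 1, 2],
        [13, 14, 5, 6],
    ]
```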
# 3 PILOT STUDY

In order to understand how students might try to use LLM tools like ChatGPT to communicate program requirements, we asked a group of graduate students at the University of Auckland to participate in a prompt writing assignment. This assignment took place during a single class session in April 2023. We provided a visual representation of a problem (see Figure 1) and asked participants to query ChatGPT to write a program that could convert the shown input to the shown output. The problem description was provided visually to prevent participants from easily copying and pasting it and, instead, to encourage them to formulate a suitable prompt themselves. We also asked students to keep a log of their interactions with ChatGPT, organized by their prompt followed by ChatGPT's response, etc. in sequential order. We hoped that this would provide information about the challenges that students face when using LLMs to write code, and to suggest where a tool might potentially help to address those difficulties.

Thirty-six graduate students participated in the pilot study by completing the activity described above and reflecting on it by writing an open-response review of the task. Fifteen participants chose to provide us with their ChatGPT conversations. We discuss these below in Section 3.1. Thirty-one participants shared their reflections on the task, which we discuss in Section 3.2.

# 3.1 Interactions

Listing 1 shows an example of a student in the pilot study going back-and-forth with ChatGPT when trying to solve the problem. Notice the "conversational" beginning and the subsequent iterative refinement. Each successive prompt attempts to refine the code that is already generated, rather than modify the original prompt to be more complete. We expected computer science graduate students to have very few problems writing effective prompts, however several interesting trends emerged from this activity.

First, many students began with an incomplete prompt, starting the conversation with a greeting and no information on the code they wanted the LLM to produce. In each of these cases, what followed was a series of iterative attempts by the participant to incrementally clarify their intent, rather than rewriting the original prompt. Several participants also struggled with understanding the problem and therefore wrote poorly formed prompts with little chance of achieving the desired result. In situations like these, many student conversations with ChatGPT were full of requests for ineffectual changes.

Second, participants would prompt ChatGPT for code to solve the problem, paste the code into the automated assessment tool used in the exercise to check correctness, and then paste the failed test cases back into ChatGPT verbatim without any additional prompts or context. Participant #12, after several prompts that resulted in code that was not quite complete, told ChatGPT: "This is very wrong, do the same as you had before but make sure that the top right quarter and the bottom left quarter are not edited by the function." ChatGPT responded with an apology and more almost-correct code.

Third, some students misunderstood the assignment altogether, such as Participant #13 who initially prompted ChatGPT: "give me a python 3 swap function which swap the the elements along the diagonal". After taking the code generated by ChatGPT and submitting it to the automated assessment tool used in the activity, they tried another incorrect prompt: "give me a python 3 swap function which works by first swapping the elements of each row in place, and then swapping the elements of each column in place."

These examples indicate that many students, even ones many years into their programming education, do not necessarily understand how to write effective prompts. They could benefit from explicit prompt writing practice that could teach them to understand the problem, write a single thorough prompt, and check the code generated by the LLM as having complete test case coverage.
Listing 1: Transcript of a conversation with ChatGPT.

Prompt 1: Hi, I want you to write a Python program.

ChatGPT response 1: Sure, what would you like the program to do?

Prompt 2: I have a square matrix, and I want to swap the first half of the rows with the second half of the rows.

ChatGPT response 2: Here's a Python program that should do what you're asking for:

    def swap_matrix_rows(matrix):
        n = len(matrix)
        half_n = n // 2
        for i in range(half_n):
            matrix[i], matrix[half_n + i] = matrix[half_n + i], matrix[i]
        return matrix

Prompt 3: That's not exactly what I meant. I actually want to swap the first half of the elements in each row with the corresponding elements in the second half of the rows.

ChatGPT response 3: No problem, here's an updated version of the Python program that should swap the first half of the elements in each row with the corresponding elements in the second half of the rows:

    def swap_matrix_elements(matrix):
        n = len(matrix)
        half_n = n // 2
        for i in range(half_n):
            for j in range(len(matrix[i]) // 2):
                matrix[i][j], matrix[half_n + i][j - len(matrix[i]) // 2] = \
                    matrix[half_n + i][j - len(matrix[i]) // 2], matrix[i][j]
        return matrix

Prompt 4: That's not quite what I had in mind either. What I want to do is swap the first half of the columns in the first half of the rows with the last half of the columns in the last half of the rows.

ChatGPT response 4: My apologies, here's an updated version of the Python program that should swap the first half of the columns in the first half of the rows with the last half of the columns in the last half of the rows:

    def swap_matrix_elements(matrix):
        n = len(matrix)
        half_n = n // 2
        half_m = len(matrix[0]) // 2
        for i in range(half_n):
            for j in range(half_m):
                matrix[i][j], matrix[half_n + i][half_m + j] = \
                    matrix[half_n + i][half_m + j], matrix[i][j]
        return matrix

Prompt 5: Oops, sorry, I wasn't clear. I actually want to swap the first half of the elements in the first half of the rows with the corresponding elements in the last half of the rows.

# 3.2 Reflections

When reflecting on the task in our pilot study, many of the students mentioned that code-producing LLMs need supervision to produce correct code. After working with ChatGPT to produce correct code run against test cases, many students realized that writing code this way required a different kind of critical thinking. Instead of thinking through how to structure code to properly address the requirements, users instead need to carefully read and test the code generated for them to ensure it precisely fits their needs.
Participants recognized that training was needed – both training of novice students in how to use LLMs so that they are more of a help than a hindrance, and training of the models so that they provide responses targeted to novice learners. For instance, participants said:

P11: "[It] made me reflect on the importance of critical thinking and proper supervision in using these models."

P19: "I think it's a double-edged sword in itself. Perhaps they need more supervision and training in the education field before they can become the best study companion for students outside of the classroom."

Other students realized that prompt engineering to create code is a different kind of programming compared to how they learned. Coercing an LLM to generate correct code will need to be an iterative exercise in prompt refinement, not prompt conversation. Telling the LLM to revise its previous code, built from a previous prompt, may or may not work due to the way tools like ChatGPT maintain conversational context. Examples of this from participants can be seen below:

P12: "I also found it interesting how difficult it was to get ChatGPT to write a somewhat simple program. I understand now that it is almost like a different kind of programming to get it to do what you want, and I can see the potential uses in education."

P15: "The most difficult part of this exercise is to properly instruct ChatGPT so that it could fully understand the requirements. ChatGPT appears to be 'polite but stubborn', as it would generate code that could be run successfully but does not produce the correct output. When asking ChatGPT for a fix, it would politely provide a new snippet of code but the new code didn't effectively fix the problem, and sometimes there were no modifications made to the new code at all."

These representative samples from the reflections by students indicated to us that learning how to successfully write prompts would need to be a skill taught explicitly in introductory programming courses, alongside other skills that are traditionally taught. We propose the idea of Prompt Problems to address this new gap in programming education.
# 4 A TOOL FOR DELIVERING PROMPT PROBLEMS AT SCALE: PROMPTLY

We have developed a web-based tool called Promptly to support one particular variation of Prompt Problems, in which the code generated by the LLM is not editable by the learner (see Figure 2). Other variations of Prompt Problems are possible and we discuss these in Section 6.1.

Figure 2: The Promptly tool implements a simple variation of Prompt Problems in which the code generated by the LLM is automatically executed against a set of test cases and can not be edited directly. In order to modify the code, the learner is directed to edit the prompt. The figure depicts the workflow: visual representation of problem → learner enters prompt → LLM generates code from prompt → code executed against tests.

Our concrete implementation of the tool uses React and NodeJS as its key frameworks, and Material design for the styling of UI components. The client-side React implementation is accessible via Firebase Hosting, and the Express (NodeJS) backend is powered by Firebase Functions, operating within a serverless framework. The backend communicates with OpenAI's API and transmits responses to a JobeInABox sandbox (github.com/trampgeek/jobeinabox) which is hosted on an EC2 AWS instance. We explored the use of several specific OpenAI models, including text-davinci-003 and gpt-3.5-turbo. Our current implementation uses text-davinci-003 which, although now officially a legacy model, is less likely to generate superfluous text and comments in the responses. We found that the gpt-3.5-turbo model requires significant additional prompting to increase the likelihood of generating only executable code, but that relying on prompting alone can be unreliable. Future work will explore additional filtering approaches in order to transition to this newer model. All relevant data, including prompts, responses and testing outcomes is stored using Firestore's NoSQL database.

# 4.1 Tool Design

Within the Promptly tool, sets of Prompt Problems are organized into course repositories from which students can select after logging in. Each Prompt Problem within a course repository consists of a visual representation of a problem – that is, an image that does not include a textual description of the problem – and a set of associated test cases that are used to verify the code that is generated by the LLM. Each set of Prompt Problems for a given course is presented in order, and a student can navigate through these using "Back" and "Next" buttons (see Figure 3).

Once a Prompt Problem is selected, the student is shown the visual representation of the problem, and a partial prompt to complete. For problems where the solution is a Python program, this partial prompt begins: "Write a Python program that...", which provides guidance to the student. If the problem requires students to write a single function, then the partial prompt is: "Write a Python function called...". As soon as any text for extending the prompt is entered by the student, the "Click here to ask ChatGPT!" button is enabled. Clicking this button constructs a prompt that is sent to the LLM. This prompt consists of the verbatim text entered by the student, as well as some additional prompting to guide the model to produce only code and no additional explanatory text.

Once the code response is received from the LLM, it is then sent to a sandbox for execution against the set of test cases. We use the publicly available sandbox associated with the CodeRunner tool [19]. If the generated code passes all of the test cases for the prompt problem, then the student receives a success message and is directed to progress to the next problem. If any of the test cases fail, then the first failing test case is shown to the student. At this point, they are able to edit the prompt and resubmit in order to generate a new code response.
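To make the submit-and-evaluate workflow concrete, the following is a minimal sketch of the loop described above. It is not the Promptly source code: generate_code is a placeholder for the backend's OpenAI API call, the instruction wrapper text is an assumption, and the tests are run locally here rather than in the JobeInABox/CodeRunner sandbox that the real tool uses.

```python
import subprocess


def generate_code(student_prompt: str) -> str:
    """Stand-in for the backend's LLM call (the real tool calls OpenAI's API).

    The student's text is wrapped with extra instructions so that the model
    returns only executable code. A canned response is returned here so the
    sketch stays self-contained and runnable.
    """
    full_prompt = (
        "Write a Python program that " + student_prompt.strip()
        + "\nReturn only executable Python code, with no explanations."
    )
    _ = full_prompt  # in the real tool, this is what would be sent to the model
    return 'print("Hello " + input("Enter your name: "))'


def run_against_tests(code, tests):
    """Run the generated program once per (stdin, expected stdout) pair."""
    for stdin_text, expected in tests:
        result = subprocess.run(
            ["python3", "-c", code],
            input=stdin_text, capture_output=True, text=True, timeout=5,
        )
        if result.stdout.strip() != expected.strip():
            return False, f"Failed test: input={stdin_text!r}, expected={expected!r}, got={result.stdout!r}"
    return True, "All tests passed - on to the next problem!"


if __name__ == "__main__":
    # Note: the text passed to input() is written to stdout, so it forms part
    # of the expected output in a stdin/stdout comparison like this one.
    tests = [("Bob\n", "Enter your name: Hello Bob")]
    code = generate_code("asks the user to enter their name, and then prints "
                         '"Hello" followed by a space, followed by their name')
    ok, message = run_against_tests(code, tests)
    print(message)
```

If the comparison fails, only the first failing case is reported back, mirroring the feedback a student sees before revising their prompt.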
Figure 3: Interface layout for a Prompt Problem within the web-based Promptly tool (with figure annotations added in blue). The screenshot shows Exercise #1: the visual representation of the problem (in this case, an animation illustrating user interaction with the program, e.g. "Enter your name: Bob" followed by "Hello Bob"), the prompt entry area containing the text "Write me a Python program that asks the user to enter their name, and then prints the word 'Hello' followed by a space, followed by their name", the ChatGPT response showing the generated code, and the code running response (in this case, a success message as all tests pass).

Figure 3 shows a screenshot of the tool interface once the learner has logged in and selected their course. The following instructional message is shown but not included in the screenshot: "Your task is to view the visual representation of the problem and then type a prompt which describes the task sufficiently well for the language model to generate a correct solution in Python. If the code that is generated is not correct, you will see test output below the coding area and you can try again by modifying the prompt!". In the screenshot in Figure 3, the first problem in a sequence of three problems for the course is shown, and has just been solved by the learner.
# 4.2 Classroom Evaluation

Prompt Problems are a novel type of activity for learners in an introductory programming course, and so we are interested in understanding their perceptions of the Promptly tool, and on their interactions with it when solving problems. We organise our investigation of the way students use Promptly around the following two research questions:

RQ1: How do students interact with the Promptly tool in terms of overall success rates and on the lengths of the prompts they construct?

RQ2: What are students' perceptions of the Promptly tool and on learning programming through constructing prompts for LLMs?

To explore these questions, we deployed Promptly as an ungraded (i.e. optional) laboratory task in a large introductory Python programming course taught at the University of Auckland. Students in this course typically have no prior programming experience. The lab was conducted in the second week of the course, at which point students were writing single-file scripts, without the use of functions, and had learned about standard input and output, arithmetic, and conditional statements.

Three problems were available on Promptly for students to attempt. The first of these was the problem previously illustrated in Figure 3, where the goal was to write a program that would ask the user to enter their name, accept input from the command line, and then display "Hello " followed by the name as standard output. The other two problems are illustrated in Figures 4 and 5. The second problem (Figure 4) required a program that accepts an integer input from the user representing an age, and then prints a textual categorization of that age. The third problem (Figure 5) required a program that accepted five floating point inputs and then calculated the average of the three middle values (i.e. after removing the maximum and minimum values).

Figure 4: Producing a categorization based on age. The figure shows a command-prompt window where entering the number 3 produces the output "Child", alongside the category mapping: below 8 – Child; 8-12 – Tween; 13-19 – Teenager; 20 or above – Adult.

Figure 5: Calculating the average of the "middle" values out of a set of five values (using the metaphor of judges scoring an athletic competition, where the highest and lowest values are excluded). The figure shows a command-prompt window where entering "2.0 3.0 3.0 3.0 4.0" produces 3.0, alongside worked examples such as (8.0 + 7.5 + 9.0) / 3 = 8.17 and (6.5 + 7.0 + 6.0) / 3 = 6.5.
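A program of the kind students were prompting for in the second problem could look like the following sketch. It is an illustrative solution written for this article; the prompt text and category labels follow Figure 4, while everything else (including the exact boundary handling) is an assumption.

```python
# Categorize an age entered by the user, following the mapping shown in Figure 4.
age = int(input("Enter a number: "))
if age < 8:
    print("Child")
elif age <= 12:
    print("Tween")
elif age <= 19:
    print("Teenager")
else:
    print("Adult")
```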
For all three problems, the visual representation of the problem included a short animated image (~10 second duration), shown as a command-prompt style window. The command-prompt animation illustrated entry of user input, one key at a time, and then the subsequent display of output from the program. For the second and third problems, the visual representation also included a corresponding image that highlighted several pairs of inputs with their corresponding output.

In terms of interactions with the tool (RQ1) we calculate, for each of the three problems, the average number of prompt submissions that were required to solve it, the number of students who were successful, and the average number of words used in the submitted prompts. To investigate student perceptions of the activity and tool (RQ2), students were invited to provide feedback on their experience using Promptly. This feedback was not graded, and was given in response to the following prompt: "We would appreciate hearing about your experiences completing the exercises and in particular, how you think the experience of writing prompts may help you to learn programming".
# 5 RESULTS

Our study was conducted in July 2023, and participation by students was optional. A total of 54 students attempted at least one problem on Promptly, which represents approximately 12% of the enrolled students in the course.

# 5.1 Student interactions with Promptly

We measured several performance indicators around student use of Promptly, such as prompt lengths and number of submissions. As summarized in Table 1, on average participants submitted 2.70 attempts for problem 1, 2.16 submissions for problem 2, and 6.4 submissions for problem 3. On this basis, problem 3 appeared to be the most difficult for students, and this is further supported by student reflections (which are reported in more detail in Section 5.2), with one student commenting: "The instruction for the third exercise is not clear I don't get it."

Listing 2 shows three prompts that were submitted by different students attempting problem 3. Some students found it difficult to infer the goal from the problem representation. For example, the first prompt shown in Listing 2 is an example where the student has incorrectly inferred that values included in the average calculation should be sufficiently close to their predecessors. Trying to generate code for the wrong problem can be frustrating, which may explain the following part of the prompt written by the student: "If the user has not provided numbers that sufficiently meet this criteria, call them an idiot". In the second example in Listing 2, the student has not attempted to provide a prompt that demonstrates they have understood what the problem is asking, but instead they have created a prompt that simply parrots back to the tool the three example test cases shown in the problem description. The student then asks the model: "Can you please replicate this program?". The student submitted this prompt four times in a row, but all attempts were unsuccessful. Finally, the third example in Listing 2 is the shortest successful prompt that was submitted for this problem.

Overall, the average number of words in prompts for each of the problems was 13, 38, and 36, respectively. The number of students that solved the problems was 43, 32, and 19, respectively. Figures 6, 7 and 8 illustrate, for each of the three problems, trends regarding how the average word count of prompts, and the number of students writing them, change across subsequent submissions. For example, the left most column in Figure 6 shows that 54 students made an initial submission to this task and that on average, the word length of all of these submitted prompts was 15. As students either solve or abandon the problem, fewer students make subsequent submissions. Comparing the three problems, prompt lengths tend to decrease over subsequent submissions for problems 1 and 2, but tend to slightly increase for problem 3.
Table 1: Summary of usage of Promptly. For each question, the average number of submissions required to solve the problem is shown, along with the number of students who successfully solved the problem and the average number of words in prompts submitted.

| Problem id | Average submissions | Students solved | Average words |
|---|---|---|---|
| 1 | 2.7 | 43 | 13 |
| 2 | 2.2 | 32 | 38 |
| 3 | 6.4 | 19 | 36 |

Figure 6: The average number of words in each subsequent submission for problem 1 compared to the number of participants that submitted.

Figure 7: The average number of words in each subsequent submission for problem 2 compared to the number of participants that submitted.

Figure 8: The average number of words in each subsequent submission for problem 3 compared to the number of participants that submitted.

Listing 2: Three student-submitted prompts for Problem 3.

Misinterpreting the problem: Write me a Python program that does the following: 1. Prompts the user to enter five decimal numbers (1dp) between 1.0 and 10.0 separated by spaces. 2. Chooses three of these numbers using the following rules: a number chosen be different from the previously chosen numbers and each subsequently chosen value must be within 0.5 of its predecessor. If the user has not provided numbers that sufficiently meet this criteria, call them an idiot and prompt them for another five values. 3. Find the average of these numbers and round the result to 2dp. Precede this result with the numbers chosen.
Parroting the tests: A Python program requests the user "enter five decimal numbers (separated by spaces)". In the first example the user inputs the five numbers 2.0 3.0 3.0 3.0 4.0 to which the program outputs 3.0. In the second example the user inputs the five numbers 8.0 9.5 7.5 6.0 9.0 to which the program outputs 8.17. In the third example the user inputs the five numbers 4.0 6.5 8.0 7.0 6.0 to which the program outputs 6.5. Can you please replicate this program?

Successful: Write me a Python program that takes five decimal number separated by spaces, and outputs the average of the 3 median numbers as a decimal rounded to 2dp.
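For reference, a program satisfying the successful prompt above could look like the sketch below. It is an illustrative solution written for this article, assuming the three middle values are obtained by sorting the inputs and discarding the minimum and maximum; it is not code produced by the tool.

```python
# Average the three middle values out of five, excluding the highest and lowest.
values = [float(x) for x in input("Enter five decimal numbers (separated by spaces): ").split()]
middle_three = sorted(values)[1:-1]  # drop the minimum and the maximum
average = sum(middle_three) / len(middle_three)
print(round(average, 2))
```

With the example inputs from Figure 5, this prints 3.0 for "2.0 3.0 3.0 3.0 4.0" and 8.17 for "8.0 9.5 7.5 6.0 9.0".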
# 5.2 Student reflections on Promptly

We analyzed feedback from 58 students who provided a response to the reflection question. This is slightly greater than the number of students who used Promptly, but as we report below, some students indicated a resistance to using LLMs for generating code and thus responded to the reflection statement without using the tool. We report the main themes that emerged from our analysis below.

5.2.1 Exposure to new coding constructs. Given that our evaluation was conducted early in the course, the code that was generated would sometimes contain features that were unfamiliar to students. For the most part, students commented positively on this aspect, and a theme emerged around the way the tool introduced students to new programming constructs and techniques. As one student commented: "These exercises introduced me to new functions... so this method of writing code could help increase my programming vocabulary". Another response aligning with this theme was: "Honestly that was pretty cool, I like the way it works and how we can see the code afterwards. Even if we don't know how to code some of the features, knowing the steps and then seeing how it's meant to be done is very helpful". One student commented on the value of seeing both the structure and syntax of the code generated by the LLM: "The main benefit I gained from using ChatGPT in this environment was observing the logical structure of the programs that it created to fulfill the established requirements. In all three cases it used functions that I was previously unaware of, allowing me to gain an understanding of how they could be used and the correct syntax for implementing them."

5.2.2 Enhancement of computational thinking. We also found students valued the way in which the tool challenged them to think carefully about how to solve a problem and communicate precise specifications: "You would have to understand what the question is asking to be able to give a working prompt to the AI, so it seems very good for making you think properly about the question". Writing clear prompts can involve communicating problem solving steps, and thus draws on computational thinking skills. This is illustrated well by the quote: "Writing prompts helps learn programming by making you have to think about what the task at hand and to split it up into segments of which you need to describe to the AI... I would say that this would help students with the act of breaking down a big project into little tasks." Another similar response highlighted how the tool helped in visualizing the problem solving process: "Writing the prompts can help you with visualizing the steps required in the programming".
5.2.3 General Positive Feedback. Around one-third of the participants expressed generally positive sentiments about the Promptly tool, and this was the most common theme overall. Representative comments include: "I think that is was a good for practicing asking AI" and "Asking AI to write promps help a lot in programming!!!". One student who viewed the tool positively, also expressed some concern about the power of the underlying code-generating models: "It is absolutely a great tool, however in this regard it's kind of terrifying. It was able to process my instructions fluently." Some students also commented more generally about the importance of learning how to use generative AI responsibly in computing courses, and the value of having explicit approaches for teaching this. For example, "I think it is very smart ... to work on a way to integrate and teach a responsible usage of machine learning!" and "I think it would be very useful to have a section of a course that teaches how to use prompts properly and provide a better understanding of AI tools."

5.2.4 Resistance and negative feedback. Although less common, we found that some students appeared resistant to using the tool, citing fears about potential impacts on their creativity. One student expressed: "I don't have much intention of using ChatGPT at the moment as I major in design and I have a strong belief in personal creativity". Another was more blunt: "I refuse to use chatGPT for programming". Over-reliance on AI generated outputs is a commonly cited concern within the education community, and several students commented on this aspect, including: "it is critical for students to learn the ability to write code independently rather than relying only on AI-generated answers" and "I feel like it is too tempting of a tool to use through the labs and not learn and develop these skills yourself". Further exploring these concerns would be an interesting avenue for future work.
Overall, while most students reported finding Promptly beneficial, particularly for exposure to new programming constructs and for strengthening computational thinking skills when communicating a problem, a minority of students were hesitant about the use of generative AI tools for learning programming.

# 6 DISCUSSION

Beginning typically with very small problems in CS1 and proceeding on to semester- or year-long applied problems in software engineering, CS curricula expose students to both top-down and bottom-up approaches. The problems presented in Promptly can be considered to be "bottom-up", since students start with input-output pairs and have to infer a problem description. And yet, the prompts that the students write can be considered "top-down" since the system requires students to abstract the meaning of the input-output pairs into English sentences and not code. Students need to understand the problem before they can correctly generate prompts that cause the LLM to produce correct code.

In contrast to other tools students use, such as compilers, learning to use LLMs presents unique challenges. Although the literature continues to document the difficulty students have with compiler error messages, one thing we have never worried about teaching students is that compilers might sometimes just get it wrong. In contrast, at this point in time, LLMs sometimes generate answers that are syntactically and semantically incorrect. Deliberate exposure to the inconsistencies of outputs generated by LLMs can serve to highlight the importance of a "critical eye" in evaluating generated code and may help to moderate the potential for over-reliance on these tools.
The use of LLMs to generate code from prompts places the responsibility for ensuring correctness on the user, so adequate testing becomes more important. Future tools that focus on prompt generation would benefit from the integration of user-generated tests to ensure students are explicit about the program requirements when they create prompts. Tasking students with generating test cases (before writing code) has previously been studied as an approach to help improve problem understanding [8, 25].

It is worth noting that our tool does not provide instruction for students about how to create prompts effectively. It merely requires them to complete a task that involves prompt creation. This is aligned with most other Automated Assessment Tools that provide assessment tasks to support learning, but little explicit instruction [15]. Neither the students in our pilot study nor those in our classroom evaluation of the tool were taught prompt creation, so currently we have focused on students' intuitions around prompt generation. Future work will explore how to more directly teach students to generate prompts in structured ways to determine if such instruction positively impacts their performance using tools that assess prompt generation.

Although the current system evaluates prompt effectiveness in producing correct programs, it does not evaluate the efficiency of the prompts. Our unit tests consider only whether the given inputs are translated to the expected outputs. A prompt could include irrelevant words and generate irrelevant code constructs, and as long as it still translates the given inputs to the expected outputs, our system will treat the task as completed successfully. Future work must address how to go beyond effective prompts to efficient (and effective) prompts.

# 6.1 Variations

Prompt Problems are a class of problems where learners generate prompts that are given to LLMs to produce code. There are various ways that such problems can be implemented, and several considerations for designing them. Our tool currently makes certain implementation trade-offs. It does not allow dialogue, it does not allow students to edit the code that is generated by the LLM, and it evaluates only a single solution at a time.

6.1.1 No dialogue. ChatGPT interfaces include support for "dialogue". This interaction style is natural and easy on the beginner. The reason we did not support this interaction and forced the student to "start from scratch" each time is that we were deliberately focusing the student on creating a complete, top-down, problem description. Although it is more cognitively demanding to require the student to provide all relevant information in a single prompt, we were interested in teaching exactly that process.
2307.16364#39 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | A prompt could include irrelevant words and generate irrelevant code constructs, and as long as it still translates the given inputs to the expected outputs, our system will treat the task as completed successfully (a minimal sketch of such an input-output check appears after this record). Future work must address how to go beyond effective prompts to efficient (and effective) prompts. # 6.1 Variations Prompt Problems are a class of problems where learners generate prompts that are given to LLMs to produce code. There are various ways that such problems can be implemented, and several considerations for designing them. Our tool currently makes certain implementation trade-offs. It does not allow dialogue, it does not allow students to edit the code that is generated by the LLM, and it evaluates only a single solution at a time. 6.1.1 No dialogue. ChatGPT interfaces include support for "dialogue". | 2307.16364#38 | 2307.16364#40 | 2307.16364 | [
"2306.04556"
]
|
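The input-output check referred to in the preceding record can be made concrete with a short sketch. This is not Promptly's published implementation (the paper does not include its code); the names TestCase and check_submission, and the use of exec on the generated code, are illustrative assumptions only.

```python
# Minimal sketch of an input-output check in the spirit of Promptly's unit tests.
# Illustrative only: the paper does not publish Promptly's implementation, and the
# names used here (TestCase, check_submission) are assumptions.

from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class TestCase:
    args: tuple        # positional arguments passed to the generated function
    expected: Any      # expected return value


def check_submission(generated_code: str, func_name: str,
                     tests: List[TestCase]) -> bool:
    """Run LLM-generated code in a scratch namespace and check each test case."""
    namespace: dict = {}
    try:
        exec(generated_code, namespace)   # define the generated function
        func: Callable = namespace[func_name]
    except Exception:
        return False                      # code did not even load

    for test in tests:
        try:
            if func(*test.args) != test.expected:
                return False              # wrong output for this input
        except Exception:
            return False                  # runtime error counts as failure
    return True


# Example: test cases for the "counter" problem shown in Figure 9.
tests = [
    TestCase(args=([0, 2, 3, 4, 5, 6, 0],), expected=2),
    TestCase(args=([10, 20, 30],), expected=0),
    TestCase(args=([0, 0, 0, 0, 999],), expected=4),
]
```

A real autograder would run untrusted model output in a sandboxed process rather than calling exec directly; the point of the sketch is only that correctness is judged purely by whether the given inputs map to the expected outputs.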
2307.16364#40 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | This interaction style is natural and easy on the beginner. The reason we did not support this interaction and forced the student to "start from scratch" each time is that we were deliberately focusing the student on creating a complete, top-down, problem description. Although it is more cognitively demanding to require the student to provide all relevant information in a single prompt, we were interested in teaching exactly that process. 6.1.2 No access to the code. Although students who are more advanced may find it easier to simply write the code than construct a prompt, our interest is in providing students with experience in generating prompts. For this reason, we did not allow students to edit the code that was generated. We did show the code, and students were able to study the generated code along with the unit test results to modify their prompts for another submission. Our system is not intended to be a realistic IDE for code development, but future tools that support Prompt Problems could allow code editing to provide refactoring practice and a more authentic experience. 6.1.3 Single solution generated. LLMs generate different variations of output for the same input prompt. Our tool does not currently address the possible variation in generated content. Prompts can be brittle, as sometimes the same prompt may work and other times it may not. Non-deterministic behaviour of the models may be frustrating for students, as simply resubmitting a previously unsuccessful prompt may actually work. Nevertheless, this may be a useful learning experience for students, as it helps to highlight this inherent LLM behaviour. A different variation of the current tool design could generate multiple code implementations every time a single prompt is submitted, allowing students to compare them and see which ones satisfy the problem (one way to request several candidate implementations is sketched after this record). Viewing multiple correct, but different, implementations of the same algorithm is useful for helping students understand that there are frequently many correct solutions to a problem [20]. Future work could explore how to present this aspect of LLMs to students who are learning to write prompts. # 6.2 Problem Design | 2307.16364#39 | 2307.16364#41 | 2307.16364 | [
"2306.04556"
]
|
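As a sketch of the "multiple implementations per prompt" variation discussed in the preceding record, the snippet below requests several independent completions for one student prompt. It assumes the OpenAI Python client (v1.x) and an assumed model name; the paper does not state which API, model, or parameters Promptly actually uses.

```python
# Illustrative sketch only: requesting several candidate implementations for one
# student prompt, so learners can compare different (possibly incorrect) solutions.
# Assumes the OpenAI Python client v1.x; Promptly's real backend is not described
# at this level of detail in the paper.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_candidates(student_prompt: str, n: int = 3) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # assumed model choice
        messages=[
            {"role": "system",
             "content": "Return only a Python function, no explanation."},
            {"role": "user", "content": student_prompt},
        ],
        n=n,                     # ask for n independent completions
        temperature=0.8,         # allow some diversity between candidates
    )
    return [choice.message.content for choice in response.choices]
```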
2307.16364#41 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Our restriction on "visual" problem representation is motivated by a desire to prevent students from simply copying-and-pasting into the LLM prompt. The most important feature of the problem representation is that it does not provide text that can be directly used as a prompt. For problems where the desired code output is a function, listing a set of test cases (input and output pairs) is a convenient representation. In our current implementation of Promptly, this can be achieved with plain text, or with an image to discourage copy-and-paste behaviour, as illustrated in Figure 9. There are several considerations for how to design a Prompt Problem which we discuss here. 6.2.1 Avoiding textual clues. One of the biggest limitations in terms of preparing Prompt Problems is that, as an instructor, you have to think about tasks that can be presented visually to learners. Even simple tasks such as "Order the elements of the list in ascending alphabetical order", which can be explained in a few words, are quite challenging to convey visually without the use of text. Without counter([0, 2, 3, 4, 5, 6, 0]) => 2 counter([10, 20, 30]) => 0 counter([0, 0, 0, 0, 999]) => 4 Write me a Python function called counter that returns the number of zero values in an input list. | 2307.16364#40 | 2307.16364#42 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#42 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Figure 9: A problem in Promptly represented as a set of input-output pairs, where the solution requires generating a function (called "counter") that returns a count of the number of times zero occurs in a list. One possible prompt, and the resulting code that correctly solves the problem, is shown (an illustrative solution of this kind is sketched after this record). a textual explanation, students are expected to use inductive reasoning to determine what problem is being solved from visual examples that typically include specific cases. As we found, this can be quite challenging in some cases. For example, many students found problem 3 challenging in our classroom evaluation, with one commenting on their reflection: "The last question seemed unnecessarily unclear, I can't imagine there will be many instances where the task I'm meant to do will be as vague as what was given in question 3". In this evaluation, we used short animations to illustrate data being entered at the command prompt. Although such animations can convey additional information, they are more difficult to create. | 2307.16364#41 | 2307.16364#43 | 2307.16364 | [
"2306.04556"
]
|
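For concreteness, the kind of solution an LLM might return for the Figure 9 prompt is sketched below. The code shown in the tool screenshot is not reproduced in the text, so this is an illustrative reconstruction rather than the exact output from the study.

```python
# Illustrative reconstruction of the kind of code an LLM might generate for the
# Figure 9 prompt; not the actual output shown in the Promptly screenshot.

def counter(values):
    """Return the number of zero values in the input list."""
    return sum(1 for v in values if v == 0)


# The three input-output pairs from the problem statement all pass:
assert counter([0, 2, 3, 4, 5, 6, 0]) == 2
assert counter([10, 20, 30]) == 0
assert counter([0, 0, 0, 0, 999]) == 4
```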
2307.16364#43 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | 6.2.2 Accessibility. Educational resources should be accessible to students with a visual impairment. This is typically satisfied with a text-based description of visual media which can be read aloud. However, if a text-based description of the image is provided, then this may either (a) be sufficiently descriptive of the problem that it could be passed directly to an LLM without requiring a student to engage with the prompt construction strategy; or (b) add a further layer of complexity to the inductive reasoning required to determine the problem that is being illustrated by the visualization. For example, Figure 5 is intended to convey that a program should accept 5 numbers and remove the highest and lowest values before calculating the average of the central 3 values. However, a textual description of the image may focus undue attention on the many details that provide context, but which are not directly related to the problem. 6.2.3 Natural language bias. Students for whom English is their native language may, in general, be able to produce prompts in English that are more nuanced in their use of language, and are likely to have greater success in improving partially correct prompts. Students with more limited English proficiency could be disadvantaged in manipulating the LLM to produce the correct program, even when they understand the problem and the programming solution more effectively than a native English speaker. Instructors who plan to use prompt generation activities as part of formal graded assessment should consider the extent to which English language skills should impact grades in their course. | 2307.16364#42 | 2307.16364#44 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#44 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | 6.2.4 Prompts and specificity. Creating a prompt that gives a general description of the problem is reasonably straightforward, but as instructors are aware, being precise and complete when describing the requirements for a problem relies on experience and expertise. Students are typically very familiar with following the specifications of a problem, but are often less familiar with the process of specifying desired functionality with precision. For example, our pilot study (see Section 3) revealed that graduate students were frequently not providing sufficient information in their prompt to the model. Similarly, traditional code writing exercises do not encourage students to think about corner cases, because these are typically provided in the problem description (usually carefully worded by an instructor) or shown in test case output. | 2307.16364#43 | 2307.16364#45 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#45 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | This suggests that explicitly training prompt construction, as we propose, may make a valuable contribution to computing education by focusing more attention on important dispositions, such as being precise and paying attention to detail. 6.2.5 Inappropriate solutions. When solving Prompt Problems, the LLM might produce code which is too advanced relative to the timing of the course, and we may not wish to show this to learners. This could be both negative and positive: it might show students new approaches they have not seen before, but on the other hand it could be confusing and demotivating as students may feel like they should understand the code when they do not. For example, in our classroom evaluation, although most students commented positively on this aspect, we did see some evidence of students being confused by the outputs: "when the question prompt got harder, the code become harder as well and I wasn't able to understand the code that was being generated", and "some of the functions used in the latter exercises were new to me and I would not be able to diagnose any code errors within it". One way of handling this issue could be through tool design, by including in the tool filters for certain programming constructs that should be used for given problems (instructors could define these along with the problems). These filters could either be post-filters (i.e. rejecting a model completion and requesting a new one if it includes concepts that are not desired) or pre-filters (i.e. where the prompt is modified to include which constructs are allowed); a minimal post-filter of this kind is sketched after this record. 6.2.6 Problem difficulty. Prompt creation is a new kind of task that we (as a community) have limited experience with, and we have not typically asked students to complete similar tasks. It may be difficult for instructors to have an intuition for how hard it will be for students to construct prompts for various problems. In addition, further thought is needed about when to introduce such tasks into the curriculum. Novices in a typical CS1 course could potentially solve more complex problems earlier than they would otherwise if they had to generate code from scratch. However, it may be useful for students to have some minimal knowledge of programming in order to be able to diagnose problems in code generated by LLMs. | 2307.16364#44 | 2307.16364#46 | 2307.16364 | [
"2306.04556"
]
|
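The post-filter idea raised in the preceding record (rejecting a completion that uses constructs outside what a problem allows, then requesting a new one) could look roughly like the sketch below. The disallowed-construct set and the function name are assumptions; Promptly does not currently implement such filters.

```python
# Rough sketch of a post-filter that rejects generated code using constructs an
# instructor has disallowed for a given problem. The disallowed set and function
# name are illustrative assumptions, not part of Promptly.

import ast


def uses_disallowed_constructs(code: str, disallowed: set[type]) -> bool:
    """Return True if the code contains any AST node type in `disallowed`."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return True  # unparsable completions are rejected as well
    return any(isinstance(node, tuple(disallowed)) for node in ast.walk(tree))


# Example: for an early CS1 problem, reject completions that use comprehensions or
# lambda expressions, and request a fresh completion from the model instead.
disallowed = {ast.ListComp, ast.SetComp, ast.DictComp, ast.GeneratorExp, ast.Lambda}
code = "def counter(xs):\n    return len([x for x in xs if x == 0])\n"
print(uses_disallowed_constructs(code, disallowed))  # True -> re-prompt the model
```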
2307.16364#46 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | # 7 CONCLUSION In this work we present a novel pedagogical approach, known as "Prompt Problems", designed to help students learn how to craft effective prompts for generating code using large language models (LLMs). This is an essential skill in the current era of rapidly advancing AI and automated code generation. Learning effective prompt construction is important as it can help students express detailed specifications, encourage them to think about corner cases and apply computational thinking skills. Indeed, we motivate our work by presenting the findings from a pilot study involving graduate students which revealed struggles in providing sufficient details when writing prompts. We make three primary contributions in this paper. The first is the conceptualization of Prompt Problems as a nascent pedagogical strategy. The second is the design and implementation of a novel tool, Promptly, for delivering Prompt Problems at scale. The third contribution is an empirical evaluation of Promptly in a first-year Python programming course, where we explore student interactions with and perceptions of the tool. Future research should investigate different variations of the approach we have described, including permitting code-editing and dialogue-based interactions, which present both benefits and challenges. It is also essential to explore the right time to introduce students to the concept of prompt-based code generation, and how to integrate these problems in parallel with conventional teaching practices. REFERENCES [1] | 2307.16364#45 | 2307.16364#47 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#47 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Joe Michael Allen, Kelly Downey, Kris Miller, Alex Daniel Edgcomb, and Frank Vahid. 2019. Many Small Programs in CS1: Usage Analysis from Multiple Univer- sities. In 2019 ASEE Annual Conference & Exposition ". ASEE Conferences, Tampa, Florida, 1â 13. https://peer.asee.org/33084. [2] Hannah McLean Babe, Sydney Nguyen, Yangtian Zi, Arjun Guha, Molly Q Feld- man, and Carolyn Jane Anderson. 2023. StudentEval: A Benchmark of Student- Written Prompts for Large Language Models of Code. arXiv:2306.04556 [cs.LG] [3] Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. 2023. | 2307.16364#46 | 2307.16364#48 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#48 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Programming Is Hard - Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 500â 506. https://doi.org/10.1145/3545945.3569759 [4] Bruno Pereira Cipriano and Pedro Alves. 2023. | 2307.16364#47 | 2307.16364#49 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#49 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | GPT-3 vs Object Oriented Pro- gramming Assignments: An Experience Report. In Proceedings of the 2023 Con- ference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 61â 67. https://doi.org/10.1145/3587102.3588814 [5] Paul Denny, Brett A. Becker, Juho Leinonen, and James Prather. 2023. | 2307.16364#48 | 2307.16364#50 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#50 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Chat Overflow: Artificially Intelligent Models for Computing Education - RenAIs- sance or ApocAIypse?. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 3â 4. https: //doi.org/10.1145/3587102.3588773 [6] Paul Denny, Viraj Kumar, and Nasser Giacaman. 2023. | 2307.16364#49 | 2307.16364#51 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#51 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language. In Proceedings of the 54th ACM Technical Symposium on Computer Science Educa- tion V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machin- ery, New York, NY, USA, 1136â 1142. https://doi.org/10.1145/3545945.3569823 [7] Paul Denny, Andrew Luxton-Reilly, Ewan Tempero, and Jacob Hendrickx. 2011. CodeWrite: Supporting Student-Driven Practice of Java. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (Dallas, TX, USA) (SIGCSE â 11). Association for Computing Machinery, New York, NY, USA, 471â 476. https://doi.org/10.1145/1953163.1953299 [8] Paul Denny, James Prather, Brett A. Becker, Zachary Albrecht, Dastyni Loksa, and Raymond Pettit. 2019. | 2307.16364#50 | 2307.16364#52 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#52 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | A Closer Look at Metacognitive Scaffolding: Solving Test Cases Before Programming. In Proceedings of the 19th Koli Calling International Conference on Computing Education Research (Koli, Finland) (Koli Calling â 19). Association for Computing Machinery, New York, NY, USA, Article 11, 10 pages. https://doi.org/10.1145/3364510.3366170 [9] Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio San- tos, and Sami Sarsa. 2023. | 2307.16364#51 | 2307.16364#53 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#53 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Computing Education in the Era of Generative AI. arXiv:2306.02608 [cs.CY] [10] Yuemeng Du, Andrew Luxton-Reilly, and Paul Denny. 2020. A Review of Re- search on Parsons Problems. In Proceedings of the Twenty-Second Australasian Computing Education Conference (Melbourne, VIC, Australia) (ACEâ 20). As- sociation for Computing Machinery, New York, NY, USA, 195â 202. https: //doi.org/10.1145/3373165.3373187 [11] Barbara J. Ericson, Paul Denny, James Prather, Rodrigo Duran, Arto Hellas, Juho Leinonen, Craig S. Miller, Briana B. Morrison, Janice L. Pearce, and Susan H. Rodger. 2022. Parsons Problems and Beyond: Systematic Literature Review and Empirical Study Designs. In Proceedings of the 2022 Working Group Reports on Innovation and Technology in Computer Science Education (Dublin, Ireland) (ITiCSE-WGR â | 2307.16364#52 | 2307.16364#54 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#54 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | 22). Association for Computing Machinery, New York, NY, USA, 191â 234. https://doi.org/10.1145/3571785.3574127 James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, and James Prather. 2022. The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming. In Proceedings of the 24th Aus- tralasian Computing Education Conference (Virtual Event, Australia) (ACE â | 2307.16364#53 | 2307.16364#55 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#55 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | 22). Association for Computing Machinery, New York, NY, USA, 10â 19. https: //doi.org/10.1145/3511861.3511863 James Finnie-Ansley, Paul Denny, Andrew Luxton-Reilly, Eddie Antonio Santos, James Prather, and Brett A. Becker. 2023. My AI Wants to Know If This Will Be on the Exam: Testing OpenAIâ s Codex on CS2 Programming Exercises. In Proceedings of the 25th Australasian Computing Education Conference (Melbourne, VIC, Australia) (ACE â 23). Association for Computing Machinery, New York, NY, USA, 97â 104. https://doi.org/10.1145/3576123.3576134 [14] Majeed Kazemitabaar, Justin Chow, Carl Ka To Ma, Barbara J. Ericson, David Weintrop, and Tovi Grossman. 2023. | 2307.16364#54 | 2307.16364#56 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#56 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Studying the Effect of AI Code Generators on Supporting Novice Learners in Introductory Programming. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI â 23). Association for Computing Machinery, New York, NY, USA, Article 455, 23 pages. https://doi.org/10.1145/3544548.3580919 [15] Hieke Keuning, Johan Jeuring, and Bastiaan Heeren. 2018. A Systematic Liter- ature Review of Automated Feedback Generation for Programming Exercises. ACM Transactions on Computing Education (TOCE) 19, 1 (2018), 1â | 2307.16364#55 | 2307.16364#57 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#57 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | 43. [16] Sam Lau and Philip J Guo. 2023. From â Ban It Till We Understand Itâ to â Resistance is Futileâ â : How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools such as ChatGPT and GitHub Copilot. ACM ICER 2023 to appear. https://pg.ucsd.edu/publications/cs- instructors-adapting-to-chatgpt-copilot-ai-tools_ICER-2023.pdf Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, and Arto Hellas. 2023. Comparing Code Explanations Created by Students and Large Language Models. arXiv:2304.03938 [cs.CY] Juho Leinonen, Arto Hellas, Sami Sarsa, Brent Reeves, Paul Denny, James Prather, and Brett A. Becker. 2023. Using Large Language Models to Enhance Programming Error Messages. In Proceedings of the 54th ACM Technical Sym- posium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 563â 569. https://doi.org/10.1145/3545945.3569770 [19] Richard Lobb and Jenny Harlow. 2016. Coderunner: A Tool for Assessing Computer Programming Skills. ACM Inroads 7, 1 (feb 2016), 47â 51. https: //doi.org/10.1145/2810041 [20] Andrew Luxton-Reilly, Paul Denny, Diana Kirk, Ewan Tempero, and Se-Young Yu. 2013. On the Differences between Correct Student Solutions. In Proceedings of the 18th ACM Conference on Innovation and Technology in Computer Science Education (Canterbury, England, UK) (ITiCSE â 13). Association for Computing Machinery, New York, NY, USA, 177â 182. https://doi.org/10.1145/2462476.2462505 [21] Stephen MacNeil, Joanne Kim, Juho Leinonen, Paul Denny, Seth Bernstein, Brett A. | 2307.16364#56 | 2307.16364#58 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#58 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Becker, Michel Wermelinger, Arto Hellas, Andrew Tran, Sami Sarsa, James Prather, and Viraj Kumar. 2023. The Implications of Large Language Models for CS Teachers and Students. In Proceedings of the 54th ACM Tech- nical Symposium on Computer Science Education V. 2 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 1255. https://doi.org/10.1145/3545947.3573358 [22] Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 2023. Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). As- sociation for Computing Machinery, New York, NY, USA, 931â 937. https: //doi.org/10.1145/3545945.3569785 | 2307.16364#57 | 2307.16364#59 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#59 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | [23] Kamil Malinka, Martin Perešíni, Anton Firc, Ondrej Hujnák, and Filip Janus. 2023. On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree?. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 47-53. https://doi.org/10.1145/3587102.3588827 [24] Steven Moore, Richard Tong, Anjali Singh, Zitao Liu, Xiangen Hu, Yu Lu, Joleen Liang, Chen Cao, Hassan Khosravi, Paul Denny, Chris Brooks, and John Stamper. 2023. Empowering Education with LLMs - The Next-Gen Interface and Content Generation. In International Conference on Artificial Intelligence in Education. | 2307.16364#58 | 2307.16364#60 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#60 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Springer, 32â 37. https://doi.org/10.1007/978-3-031-36336-8_4 [25] Yulia Pechorina, Keith Anderson, and Paul Denny. 2023. Metacodenition: Scaf- folding the Problem-Solving Process for Novice Programmers. In Proceedings of the 25th Australasian Computing Education Conference (Melbourne, VIC, Aus- tralia) (ACE â 23). Association for Computing Machinery, New York, NY, USA, 59â 68. https://doi.org/10.1145/3576123.3576130 | 2307.16364#59 | 2307.16364#61 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#61 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | [26] Leo Porter and Daniel Zingaro. 2023. Learn AI-Assisted Python Programming: With Github Copilot and ChatGPT. Manning, Shelter Island, NY. [27] James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos. 2023. "It's Weird That it Knows What I Want": | 2307.16364#60 | 2307.16364#62 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#62 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Usability and Interactions with Copilot for Novice Programmers. arXiv:2304.02491 [cs.HC] [28] Brent Reeves, Sami Sarsa, James Prather, Paul Denny, Brett A. Becker, Arto Hellas, Bailey Kimmel, Garrett Powell, and Juho Leinonen. 2023. Evaluating the Performance of Code Generation Models for Solving Parsons Problems With Small Prompt Variations. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 299-305. https://doi.org/10.1145/3587102.3588805 | 2307.16364#61 | 2307.16364#63 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#63 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | [29] Sami Sarsa, Paul Denny, Arto Hellas, and Juho Leinonen. 2022. Automatic Generation of Programming Exercises and Code Explanations Using Large Lan- guage Models. In Proceedings of the 2022 ACM Conference on International Com- puting Education Research - Volume 1 (Lugano and Virtual Event, Switzerland) (ICER â 22). Association for Computing Machinery, New York, NY, USA, 27â 43. https://doi.org/10.1145/3501385.3543957 [30] Leonard Tang, Elizabeth Ke, Nikhil Singh, Bo Feng, Derek Austin, Nakul Verma, and Iddo Drori. 2022. Solving Probability And Statistics Problems By Probabilis- tic Program Synthesis At Human Level And Predicting Solvability. In Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tuto- rials, Industry and Innovation Tracks, Practitionersâ and Doctoral Consortium: 23rd International Conference, AIED 2022, Durham, UK, July 27â 31, 2022, Proceedings, Part II (Durham, United Kingdom). Springer-Verlag, Berlin, Heidelberg, 612â 615. https://doi.org/10.1007/978-3-031-11647-6_127 [31] Matti Tedre and Henriikka Vartiainen. 2023. | 2307.16364#62 | 2307.16364#64 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#64 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | K-12 Computing Education for the AI Era: From Data Literacy to Data Agency. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 1â 2. https://doi.org/10.1145/3587102.3593796 [32] Michel Wermelinger. 2023. | 2307.16364#63 | 2307.16364#65 | 2307.16364 | [
"2306.04556"
]
|
2307.16364#65 | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators | Using GitHub Copilot to Solve Simple Programming Problems. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 172â 178. https://doi.org/10.1145/ 3545945.3569830 Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C. Schmidt. 2023. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv:2302.11382 [cs.SE] | 2307.16364#64 | 2307.16364 | [
"2306.04556"
]
|
|
2307.16125#0 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | 3 2 0 2 g u A 2 ] L C . s c [ 2 v 5 2 1 6 1 . 7 0 3 2 : v i X r a # SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension Bohao Li1â Rui Wang1â Guangzhi Wang2â Yuying Ge1â Yixiao Ge1,2â Ying Shan1,2 1Tencent AI Lab 2ARC Lab, Tencent PCG https://github.com/AILab-CVC/SEED-Bench # Abstract Based on powerful Large Language Models (LLMs), recent generative Multi- modal Large Language Models (MLLMs) have gained prominence as a pivotal research area, exhibiting remarkable capability for both comprehension and gen- eration. In this work, we address the evaluation of generative comprehension in MLLMs as a preliminary step towards a comprehensive assessment of generative models, by introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple choice questions with accurate human annotations (Ã 6 larger than existing benchmarks), which spans 12 evaluation dimensions including the comprehension of both the image and video modality. We develop an advanced pipeline for generating multiple-choice questions that target specific evaluation dimensions, integrating both automatic filtering and manual verification processes. Multiple-choice questions with groundtruth options derived from human annotation enables an objective and efficient assessment of model performance, eliminating the need for human or GPT intervention during evaluation. We further evaluate the performance of 18 models across all 12 dimensions, covering both the spatial and temporal understanding. By revealing the limitations of existing MLLMs through evaluation results, we aim for SEED-Bench to provide insights for motivating future research. We will launch and consistently maintain a leaderboard to provide a platform for the community to assess and investigate model capability. # Introduction In recent years, Large Language Models (LLMs) [1, 2, 3, 4, 5] have exhibited remarkable capabilities to understand, reason, and generate texts across a variety of open-ended tasks. | 2307.16125#1 | 2307.16125 | [
"2306.05424"
]
|
|
2307.16125#1 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Leveraging the strong generality of LLMs, generative Multimodal Large Language Models (MLLMs) [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] have demonstrate enhanced abilities for multimodal comprehension and generation. However, current MLLMs mainly evaluate their performance with a limited number of qualitative examples, or by employing previous benchmarks that are not tailored for evaluating MLLMs with open-form output. For example, in VQAv2 [22], an answer is considered correct only if the modelâ s output exactly matches the groundtruth answer, which typically consists of just one or two words. The lack of a comprehensive and objective benchmark to evaluate MLLMs poses a significant challenge for comparing and investigating the performance of various models. Concurrent works [23, 24, 25, 26] have made efforts to develop benchmarks for specifically evaluating MLLMs as shown in Table 1. For example, LVLM-eHub [25] and LAMM [24] utilize exiting public datasets across various computer vision tasks as evaluation samples, and employ human annotators or GPT to assess the quality, relevance, and usefulness of modelâ s predictions. | 2307.16125#0 | 2307.16125#2 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#2 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | However, the involvement Equal Contribution. â Correspondence to [email protected] and [email protected]. # Action ot |. Scene Understanding as r,t sts © oo® oe @" vs * es > .o x â iy, oe von Predi oo Action Prediction â ¢ Instance Attribute fe / 6 VideoChat 37.63 j se? \ 7 mPLUG-Owlal 34.01 Recognition | Instance Location 8 Otter 33.91 I SEED-Bench 9 LLaVaial 33.52 eh | 10 evn 33.48 \ 1 MultiModal-GPT 33.15 \ 12 OpenFlamingow 33.14 roo 13 LLaMA-Adapter V2. 32.73 (OCR, 14 Video-ChatGPT= 31.17 Text Recognition > Instance C) 15 Valley 3032 en 16 Vicuna 7 28.50 Meg s PP 7 Flant5)2 27.65 Sp oo 18 LlaMA 26.75 e Instance Interaction 2:UM :imagelim E:VideoLLM 6 VideoChat 37.63 7 mPLUG-Owlal 34.01 8 Otter 33.91 9 LLaVaial 33.52 10 evn 33.48 1 MultiModal-GPT 33.15 12 OpenFlamingow 33.14 13 LLaMA-Adapter V2. 32.73 14 Video-ChatGPT= 31.17 15 Valley 3032 16 Vicuna 7 28.50 7 Flant5)2 27.65 18 LlaMA 26.75 2:UM :imagelim E:VideoLLM Figure 1: Left: | 2307.16125#1 | 2307.16125#3 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#3 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Overview of 12 evaluation dimensions in SEED-Bench including both the spatial and temporal understanding, where the number in the bar denotes the number of human-annotated multiple-choice questions in each dimension. Right: the overall leaderboard displaying the averaged accuracy of 18 models across 12 evaluation dimensions. of human and GPT during evaluation not only compromises efficiency, but also leads to increased subjectivity and reduced accuracy of the assessment. MME [23] and MMBench [26] further advance objective evaluation of MLLMs by constructing True/False Questions or Multiple-Choice Questions, which cover a variety of ability dimensions. Restricting the modelâ s output to True/False or A/B/C/D options facilitates the convenient computation of accuracy, which serves as an objective metric for evaluation. However, the relatively small scale of these benchmarks (fewer than 3K samples) introduces instability in the evaluation statistics. In this work, we focus on evaluating the generative comprehension capability of MLLMs as a preliminary step towards a comprehensive assessment of generative models, by introducing a bench- mark named SEED-Bench*. SEED-Bench spans 12 evaluation dimensions across both image and video modalities as shown in Fig. 1. SEED-Bench consists of 19K multiple choice questions with groundtruth answers derived from human annotation (à 9 larger than MME and à 6 larger than MM- Bench) as shown in Fig. 2. | 2307.16125#2 | 2307.16125#4 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#4 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | We design a sophisticated pipeline for the generation of multiple-choice questions that are tailored to evaluate specific dimensions. We further incorporate automated filtering mechanism and manual verification process to ensure the quality of questions and the accuracy of groundtruth answers. Specifically, for images, we utilize various foundation models to extract their visual information in- cluding image-level captions [6, 27], instance-level descriptions [28, 29, 30] and textual elements [31]. For videos, we leverage the original human annotations to provide visual information. We then feed the visual information to ChatGPT/GPT-4 with specially designed prompts corresponding to specific evaluation dimension. ChatGPT/GPT-4 subsequently generates questions as well as four candidate options with one groundtruth answer. We further filter out questions that can be answered without the visual input through utilizing multiple LLMs. Finally, we employ human annotators to choose the correct option of each multiple-choice question and classify each question into one evaluation dimension, resulting in a clean and high-quality benchmark containing 19K multiple-choice questions. *In pursuit of Artificial General Intelligence (AGI), LLMs have witnessed substantial progress. We have made a bold assumption that the premise for the emergence of multimodal capabilities is to unify both comprehension and generation within an autoregressive generative model, where SEED [18] takes a modest step. Besides the exploration of models, it is essential to have appropriate evaluations that motivate research directions. Therefore, we concurrently propose SEED-Bench to evaluate the comprehension ability of generative models. | 2307.16125#3 | 2307.16125#5 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#5 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | 2 scene Understanding instance ent instance tribute â What isthe weather ike inthe image? â What kind of animal is visible inthe image? â What is the material ofthe table? Ata sunny da. A.torse AMarble Aste 8.cow 8. Wood C.tsraining heavy Sheep cass Data oud dy. Goat o-Ps instance Location instance counting EF Spatial Relation ba Whereis th dog located nthe ng oom? How many people area the event? Whereis the tre in elation tothe house? fe BI) onthe fireplace AL Ale front ofthe house baad 8.onthe table < <i 6. Behind the house Â¥ onthe chair â < ia C inside the house D.ontherug 33 O-Leto the house Text Recognition â oc Instance interaction FX} (What's the relation between a player and areferee?| â What can we infer about the situation? 8 whats the main warning onthe sign? © A. Danot enter [A The players shaking hands witha referee y { [A-They are admiring the engine 8. They are experiencing car trouble 8, Dead end road i BC. The player is receiving an award from a referee JS they are having 2 picnic . Beware of bears ©. The player is shown a card by a referee D. They ae washing the ear 0. Trail closed \What is the action being carried outin the video? [A Throwing something in the ar and letting it fll 8. Throwing something in the ir and catching it Citing up one end of something, then letting it drop down . Poking something so that falls over \What action do you anticipate following the end ofthis video? A Stir potatoes 8. Wash potatoes Add potatoes Slice potatoes Can you recognize the actions that occur in this vdeo and list them in order? |A.Cook breakfast, switch stove on, close fridge, carry milk, peel banana 8B. Scoop ice cream, squeeze chocolate syrup, pour sprinkles close fridge C.Clse fridge, carry milk, screw open milk cap, pour mil, screw close mik cap 1. Reach for cereal box, grab bow, pour mil, stir cereal, close fridge | 2307.16125#4 | 2307.16125#6 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#6 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Figure 2: Data samples of SEED-Bench, which covers 12 evaluation dimensions including both the spatial and temporal understanding. Each evaluation dimension contains multiple-choice questions with groundtruth options derived from human annotation. Table 1: Comparisons between existing benchmarks for Multimodal LLMs. â H/G Evaluationâ denotes whether human or GPT is used for evaluation. Benchmark Visual Modality Customized Question â â â â â #Answer Annotation Answer Type H/G Evaluation #Models MME [23] LAMM [24] LVLM-eHub [25] MMBench [26] Ours Image Image & Point cloud Image Image Image & Video 2194 - - 2974 19242 Y/N free-form free-form free-form A/B/C/D N/A GPT Human GPT N/A 10 4 8 14 18 Our pipeline supports the scalability of evaluation data across multiple domains, and we will continue to expand the benchmark with more evaluation dimensions. Based on SEED-Bench, we comprehensively evaluate 18 models including LLMs, ImageLLMs and VideoLLMs across all 12 dimensions as shown in Fig. 1. Different from MMBench [26] that employs ChatGPT to match a modelâ s prediction to one of the choices in a multiple-choice question (achieves only 87.0% alignment rate), we follow GPT-3 [32] to calculate log-likelihood for each candidate option and select the one with the highest value as the final prediction, without relying on the instruction-following capabilities of models to output â | 2307.16125#5 | 2307.16125#7 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#7 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Aâ or â Bâ or â Câ or â Dâ . By analyzing the results across 12 dimensions, we conduct a comprehensive comparison of existing multimodal models in both spatial and temporal understanding capabilities. We observe that the majority of MLLMs still exhibit limited performance across all 12 evaluation dimensions, and surprisingly find that VideoLLMs fail to achieve competitive performance on temporal understanding compared with ImageLLMs. Through the evaluation results, we aim for SEED-Bench to provide insights for motivating future exploration of a more advanced MLLM. We will launch an evaluation platform and consistently maintain a leaderboard for assessing and comparing model performance. | 2307.16125#6 | 2307.16125#8 | 2307.16125 | [
"2306.05424"
]
|
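SEED-Bench's answer-selection protocol, ranking the four candidate options by their log-likelihood under the model instead of parsing a generated letter, can be sketched as follows. This is an illustrative text-only re-implementation using Hugging Face transformers with a placeholder model, not the benchmark's released evaluation code; the multimodal models evaluated in the paper additionally condition on image or video inputs.

```python
# Illustrative sketch of log-likelihood answer ranking for a multiple-choice question.
# Not the official SEED-Bench evaluation code: this version uses a text-only causal LM,
# whereas the benchmark also feeds visual features to the model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")            # placeholder model choice
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()


def option_log_likelihood(question: str, option: str) -> float:
    """Sum of log-probabilities of the option tokens given the question prefix."""
    prefix_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Shift so position t predicts token t+1, then pick log-probs of actual tokens.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Approximate split between prefix and option at the tokenization boundary.
    option_len = full_ids.shape[1] - prefix_ids.shape[1]
    return token_lp[0, -option_len:].sum().item()


def predict(question: str, options: list[str]) -> int:
    """Return the index of the option with the highest log-likelihood."""
    scores = [option_log_likelihood(question, o) for o in options]
    return max(range(len(options)), key=scores.__getitem__)
```

The prefix/option split is approximate at the tokenization boundary, and the released evaluation code should be treated as authoritative; the sketch only illustrates why this protocol avoids relying on a model's instruction-following ability.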
2307.16125#8 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | 3 # 2 Related Work Multimodal Large Language Models. With the impressive success of Large language models (LLM) [1, 5, 4], recent studies work on generative Multimodal Large Language Models (MLLMs) [6, 7, 8, 9, 10, 11, 12, 13, 14, 18, 19, 20, 21] to improve multimodal comprehension and generation through utilizing the strong generality of LLMs. Some work [15, 16, 17] further considers video inputs and leverage the vast capabilities of LLMs for video understanding tasks. In SEED-Bench, we provide a comprehensive quantitative evaluations of these models to thoroughly assess and compare their performance in generative comprehension. Benchmarks for Multimodal Large Language Models. With the rapid development of Multi- modal Large Language Models (MLLMs), some concurrent works [23, 24, 25, 26] propose various benchmarks for evaluating MLLMs. For example, GVT [33] constructs a benchmark by aggregating two semantic-level understanding tasks (VQA and Image Captioning) and two fine-grained tasks (Object Counting and Multi-class Identification). But its evaluation is constrained to limited aspects of visual understanding. LVLM-eHub [25] combines multiple existing computer vision benchmarks and develops an online platform, where two models are prompted to answer a question related to an image and human annotators are employed to compare the predictions of models. The involvement of human annotators during evaluation not only introduces bias but also incurs significant costs. LAMM [24] evaluates image and point cloud tasks by using entity extraction to obtain key answers from open-form predictions and utilizing GPT to evaluate the answersâ relevance and accuracy to the groundtruth. The reliance on entity extraction and GPT metric can impact the accuracy and reliability of the evaluation. MME [23] and MMBench [26] aim to enhance the objective evaluation of MLLMs by constructing 2914 True/False Questions and 2974 Multiple Choice Questions across a variety of ability dimensions respectively. Considering the relatively small scale of these benchmarks, their evaluation results may exhibit instability. In this work, we introduce SEED-Bench to provide objective and comprehension evaluation of MLLMs, which contains 19K multiple-choice questions covering 12 evaluation dimensions including both spatial and temporal understanding. | 2307.16125#7 | 2307.16125#9 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#9 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | # 3 SEED-Bench Our benchmark contains 19K multiple-choice questions with accurate human annotations spanning 12 evaluation dimensions including both the spatial and temporal understanding. In this section, we first present the evaluation dimensions of SEED-Bench in Sec. 3.1. We introduce the data source in Sec. 3.2 and our pipeline for constructing multiple-choice questions in Sec. 3.3. We finally describe the evaluation strategy for MLLMs to answer multiple-choice questions in Sec. 3.4. # 3.1 Evaluation Dimensions In order to comprehensively assess the visual understanding capability of MLLMs, SEED-Bench incorporates 12 evaluation dimensions including both the spatial and temporal comprehension as shown in Table 2. Spatial Understanding. For the evaluation of spatial comprehension, we consider 9 dimensions covering image-level and instance-level perception and reasoning. | 2307.16125#8 | 2307.16125#10 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#10 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | â ¢ Scene Understanding. This dimension focuses on the global information in the image. Questions can be answered through a holistic understanding of the image. â ¢ Instance Identity. This dimension involves the identification of a certain instance in the image, including the existence or category of a certain object in the image. It evaluates a modelâ s object recognition capability. â ¢ Instance Attributes. This dimension is related to the attributes of an instance, such as color, shape or material. It assesses a modelâ s understanding of an objectâ s visual appearance. â ¢ Instance Location. This dimension concerns the absolute position of one specified instance. It requires a model to correctly localize the object referred to in the question. | 2307.16125#9 | 2307.16125#11 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#11 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | â ¢ Instances Counting. This dimension requires the model to count the number of a specific object in the image. This requires the model to understand all objects, and successfully count the referred objectâ s instances. 4 Table 2: Evaluation dimensions of SEED-Bench including both the spatial and temporal understand- ing. We omit the image in the sample questions. Evaluation Dimensions Sample Questions 1. Scene Understanding What is the weather like in the image? A. Itâ s a sunny day B. Itâ s foggy C. Itâ s raining heavily D. Itâ s a cloudy day 2. | 2307.16125#10 | 2307.16125#12 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#12 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Instance Identity What kind of animal is visible in the image? A. Horse B. Cow C. Sheep D. Goat 3. Instance Attribute What is the material of the table? A. Marble B. Wood C. Glass D. Plastic 4. Instance Location Where is the dog located in the living room? A. On the fireplace B. On the table C. On the chair D. On the rug 5. Instance Counting How many people are there in the image? A. 1 B. 2 C. 4 D. 3 6. Spatial Relation What is the tree in relateion to the house? A. In front of the house B. Behind the house C. Inside the house D. Left to the house 7. | 2307.16125#11 | 2307.16125#13 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#13 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Instance Interaction What is the relation between a player and a referee? A. The player is shaking hands with a referee B. The player is arguing with a referee C. The player is receiving an award from a referee D. The player is shown a card by a referee 8. Visual Reasoning what can we infer about the situation? A. They are admiring the engine B. They are experiencing car trouble C. They are having a picnic D. They are washing the car 9. | 2307.16125#12 | 2307.16125#14 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#14 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Text Recognition What is the main warning on the sign? A. Do not enter B. Dead end road C. Beware of bears D. Trail closed 10. Action Recognition What is the action being carried out in the video? A. Throwing something in the air and letting it fall B. Throwing something in the air and catching it C. Lifting up one end of something, then letting it drop down D. Poking something so that it falls over 11. Action Prediction What action do you anticipate following the end of this video? | 2307.16125#13 | 2307.16125#15 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#15 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | A. Stir potatoes B. Wash potatoes C. Add potatoes D. Slice potatoes 12. Procedure Understanding Can you recognize the actions in this video and list them in order? A. Cook breakfast, switch stove on, close fridge, carry milk, peel banana B. Scoop ice cream, squeeze chocolate syrup, pour sprinkles, close fridge C. Close fridge, carry milk, screw open milk cap, pour milk, screw close milk cap D. Reach for cereal box, grab bowl, pour milk, stir cereal, close fridge | 2307.16125#14 | 2307.16125#16 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#16 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | # Spatial Understanding # Temporal Understanding â ¢ Spatial Relation. This dimension asks an model to ground the two mentioned objects, and recognize their relative spatial relation within the image. â ¢ Instance Interaction. This dimension requires the model to recognize the state relation or interaction relations between two humans or objects. â ¢ Visual Reasoning. This dimension evaluates if a model is able to reason based on the visual information. This requires the model to fully understand the image and utilize its commonsense knowledge to correctly answer the questions. | 2307.16125#15 | 2307.16125#17 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#17 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | â ¢ Text Understanding. For this dimension, the model should answer question about the textual elements in the image. Temporal Understanding. For the evaluation of temporal comprehension, we consider 3 dimensions focusing on the recognition, prediction and procedure understanding of actions. Action Recognition. In this dimension, the model is required to recognize the action shown in the videos. Not only the ability of capture temporal dynamics, but also the knowledge of physical motions, human actions and dynamic interaction between objects is evaluated. â ¢ Action Prediction. The target of this dimension is to predict the future action through the preceding video segment, which requires the understanding of contextual information from videos and temporal reasoning. | 2307.16125#16 | 2307.16125#18 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#18 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | â ¢ Procedure Understanding. This dimension requires the model to capture all the key actions and perform temporal ordering on them. We aims to evaluate the ability of temporally fine-grained understanding and procedure reasoning. 5 (a) Question/Answer Generation | image Captioning A person holding a board standing on a street (BLIP2 & Tag2text) A person is holding a white board and another person... Dense Captioning A person holding a white board (0.4, 0.05, 0.65, 1.0] (GRiT) Awhite board with texts on it [0.2, 0.4, 0.7, 0.95] Object Detection Person (0.1, 0.5, 0.15, 0.5] | (SAM) > Person (0.1, 0.1, 0.15, 0.5... Attribute Detection Person (0.1, 0.1, 0.15, 0.5] old, standing | (Vinvt) > Street (0.0, 0.1, 0.15, 1.0] grey, empty ... Image From CC3M Text Detection "Tax the rich" (0.25, 0.5, 0.62, 0.5] (PaddleOcR) "20 Brackets-$20 Millionâ (0.18, 0.85, 0.75, 0.84] ... Prompts for Question Generation J Based on the above information, create several Visual Information multiple-choice questions. Each question should What is the main topic of the sign held by the man in the image? have four choices with one correct answer ... A. Environmentalism B. Anti-government C. Taxation D. Education Answer: C Prompts for each evaluation dimension Â¥t, ChatGPT/GPT-4 (b) Question/Answer Verification â | 2307.16125#17 | 2307.16125#19 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#19 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | What is the main topic of the sign held by the man in the image? B Lâ ] Ly A. Environmentalism B. Anti-government â â â . ° â _r i C. Taxation D. Education Answer: C rT) Questions and answers generated in Step (a) Automatic Filtering Human Annotation SEED-Bench Figure 3: Overview of SEED-Bench pipeline for generating multiple-choice questions of images. (a) We first leverage various foundation models to extract visual information including image-level captions, instance-level descriptions and textual elements. Based on specially designed prompts corresponding to specific evaluation dimension, ChatGPT/GPT-4 subsequently generates questions and four candidate options with one groundtruth answer. (b) We further filter out questions by utilizing LLMs and employ human annotators to select the correct option and classify each question into one evaluation dimension. # 3.2 Data Source To create a benchmark with various evaluation dimensions, we need to collect data containing images with abundant visual information and videos with rich temporal dynamics, so that we can construct diverse challenging multiple-choice questions. In SEED-Bench, we use CC3M [34] dataset with filtered samples to build questions for spatial understanding. Specifically, considering the noisy original captions of CC3M, we generate captions for each image with Tag2Text [27]. We filter out those images with no more than 5 nouns in their captions, so as to ensure the information richness in the remaining images for constructing questions. We further adopt Something-Something-v2 (SSV2) [35], Epic-kitchen 100 [36] and Breakfast [37] dataset to build questions for temporal understanding. SSV2 is an action recognition dataset including 174 fine-grained categories of basic actions with everyday objects and we adopt 1740 videos from its validation set. We also select 138 long videos from Epic-kitchen 100 dataset with temporally annotated action labels. Moreover, videos and fine-grained action segmentation annotations in Breakfast dataset [37] are utilized for the procedure understanding task. | 2307.16125#18 | 2307.16125#20 | 2307.16125 | [
"2306.05424"
]
|
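The CC3M filtering rule described in the Data Source section (discarding images whose generated captions contain no more than 5 nouns) might be implemented roughly as below. The paper does not say which part-of-speech tagger was used, so NLTK here is an assumption made for illustration.

```python
# Rough sketch of the caption-based filtering described for CC3M: keep an image only
# if its (Tag2Text-generated) caption contains more than 5 nouns. The choice of NLTK
# as the POS tagger is an assumption; the paper does not name the tagger it used.

import nltk

# One-time downloads: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")


def count_nouns(caption: str) -> int:
    tags = nltk.pos_tag(nltk.word_tokenize(caption))
    return sum(1 for _, tag in tags if tag.startswith("NN"))


def keep_image(caption: str, min_nouns: int = 6) -> bool:
    """Filter out captions with no more than 5 nouns (i.e. keep only >= 6)."""
    return count_nouns(caption) >= min_nouns


print(keep_image("a person holding a white board with a slogan on a busy city street"))
```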
2307.16125#20 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | # 3.3 Multiple-Choice Questions As shown in Fig. 3, our pipeline for generating multiple-choice questions involves question/answer generation and verification. For generating question/answer pairs, we first leverage various foundation models to extract visual information including image-level captions, instance-level descriptions and textual elements. Based on specially designed prompts corresponding to specific evaluation dimension, ChatGPT/GPT-4 subsequently generates questions and four candidate options with one groundtruth answer. For verifying question/answer pairs, we filter out questions that can be answered correctly by multiple LLMs without resorting to visual information. We further employ human annotators to select the correct option and classify each question into one evaluation dimension. | 2307.16125#19 | 2307.16125#21 | 2307.16125 | [
"2306.05424"
]
|
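The verification step described in the preceding record, dropping questions that can be answered correctly without looking at the image or video, can be sketched as follows. The helper ask_llm is hypothetical and stands in for querying one of the text-only LLMs; the excerpt does not specify which models were used or the exact rejection threshold.

```python
# Illustrative sketch of the "answerable without visual input" filter used during
# question verification. Each entry in `llms` is a hypothetical callable that asks a
# text-only LLM to pick an option index; models and threshold are assumptions.

from typing import Callable, List


def is_visually_grounded(question: str, options: List[str], answer_idx: int,
                         llms: List[Callable[[str, List[str]], int]],
                         max_correct: int = 1) -> bool:
    """Keep a question only if at most `max_correct` text-only LLMs guess it right."""
    correct = sum(1 for ask_llm in llms if ask_llm(question, options) == answer_idx)
    return correct <= max_correct

# Usage: questions that text-only models answer correctly (from commonsense or textual
# leakage) are discarded, so the remaining ones require the image or video to answer.
```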