id
stringlengths 12
15
| title
stringlengths 8
162
| content
stringlengths 1
17.6k
| prechunk_id
stringlengths 0
15
| postchunk_id
stringlengths 0
15
| arxiv_id
stringlengths 10
10
| references
listlengths 1
1
|
---|---|---|---|---|---|---|
2308.00352#5 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | â ¢ We introduce MetaGPT, a meta-programming framework for multi-agent collaboration based on LLMs. It is highly convenient and flexible, with well-defined functions like role definition and message sharing, making it a useful platform for developing LLM-based multi-agent systems. # 1https://en.wikipedia.org/w/index.php?title=Metaprogramming 2 Preprint â ¢ Our innovative integration of human-like SOPs throughout MetaGPTâ s design significantly en- hances its robustness, reducing unproductive collaboration among LLM-based agents. Furthermore, we introduce a novel executive feedback mechanism that debugs and executes code during runtime, significantly elevating code generation quality (e.g., 5.4% absolute improvement on MBPP). | 2308.00352#4 | 2308.00352#6 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#6 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | â ¢ We achieve state-of-the-art performance on HumanEval (Chen et al., 2021a) and MBPP (Austin et al., 2021). Extensive results convincingly validate MetaGPT, suggesting that it is a promising meta-programming framework for developing LLM-based multi-agent systems. 2 RELATED WORK Automatic Programming The roots of automatic programming reach back deep into the previ- ous century. In 1969, Waldinger & Lee (1969) introduced â PROW,â a system designed to accept program specifications written in predicate calculus, generate algorithms, and create LISP imple- mentations (McCarthy, 1978). Balzer (1985) and Soloway (1986) made efforts to advance auto- matic programming and identified potential methods to achieve it. Recent approaches use natural language processing (NLP) techniques (Ni et al., 2023; Skreta et al., 2023; Feng et al., 2020; Li et al., 2022; Chen et al., 2018; 2021b; Zhang et al., 2023). Automatic programming has grown into an industry delivering paid functions such as Microsoft Copilot. Lately, LLMs-based agents (Yao et al., 2022; Shinn et al., 2023; Lin et al., 2023) have advanced automatic programming develop- ment. Among them, ReAct (Yao et al., 2022) and Reflexion (Shinn et al., 2023) utilize a chain of thought prompts (Wei et al., 2022) to generate reasoning trajectories and action plans with LLMs. Both works demonstrate the effectiveness of the ReAct style loop of reasoning as a design paradigm for empowering automatic programming. Additionally, ToolFormer (Schick et al., 2023) can learn how to use external tools through simple APIs. The research most closely aligned with our work by Li et al. (2023) proposes a straightforward role-play framework for programming that involves communication between agents playing different roles. Qian et al. (2023) utilizes multiple agents for software development. Although existing papers (Li et al., 2023; Qian et al., 2023) have improved productivity, they have not fully tapped into effective workflows with structured output formats. This makes it harder to deal with complex software engineering issues. | 2308.00352#5 | 2308.00352#7 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#7 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | LLM-Based Multi-Agent Frameworks Recently, LLM-based autonomous agents have gained tremendous interest in both industry and academia (Wang et al., 2023b). Many works (Wang et al., 2023c; Du et al., 2023; Zhuge et al., 2023; Hao et al., 2023; Akata et al., 2023) have improved the problem-solving abilities of LLMs by integrating discussions among mul- tiple agents. Stable-Alignment (Liu et al., 2023) creates instruction datasets by deriving consensus on value judgments through interactions across a sandbox with LLM agents. Other works focus on sociological phenomena. For example, Generative Agents (Park et al., 2023) creates a â | 2308.00352#6 | 2308.00352#8 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#8 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | townâ of 25 agents to study language interaction, social understanding, and collective memory. In the Natural Language-Based Society of Mind (NLSOM) (Zhuge et al., 2023), agents with different functions interact to solve complex tasks through multiple rounds of â mindstorms.â Cai et al. (2023) propose a model for cost reduction by combining large models as tool makers and small models as tool users. Some works emphasize cooperation and competition related to planning and strategy (Bakhtin et al., In our implementations, we 2022); others propose LLM-based economies (Zhuge et al., 2023). observe several challenges to multi-agent cooperation, such as maintaining consistency and avoiding unproductive cycles. This motivates our focus on applying advanced concepts such as Standard Operating Procedures in software development to multi-agent frameworks. | 2308.00352#7 | 2308.00352#9 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#9 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | # 3 METAGPT: A META-PROGRAMMING FRAMEWORK MetaGPT is a meta-programming framework for LLM-based multi-agent systems. Sec. 3.1 pro- vides an explanation of role specialization, workflow and structured communication in this frame- work, and illustrates how to organize a multi-agent system within the context of SOPs. Sec. 3.2 presents a communication protocol that enhances role communication efficiency. We also imple- ment structured communication interfaces and an effective publish-subscribe mechanism. These methods enable agents to obtain directional information from other roles and public information | 2308.00352#8 | 2308.00352#10 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#10 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 3 Preprint Name Alex Agent Profile Profile Engineer Goal Write elegant, readable.extensible, efficient code @ 5 wy 8 . Constraint The code you write should conform to code standard like PEP8, be modular,easy to read and maint Architect Project Manager L en a ee @ diagram tool @ diagram tool Me = @& eC Memory Retrieval msgA . . (subscription| Engineer Iterative Programming content: {Architect: Implementation appro...) - â + i Code: game.py msgB |( instruct_conter ata structures and in ..." © lex game.py Z = 090 ee cause_by: WriteTasks import random . . Engineer BPRD Document msgC |\ sent_from: ProjectManager cass i" writes [class Game: BsSystem Design send_to: Engineer game, def init__(self, ae self.size = self.score Feedback self.high_score = 0 . J self.board = {[0l*size for_in | ag code range(size Product Manager QA Engineer self.game_over = False a seff.start) @& web search tool x Debug + â E) ti BA ch search ool ae ran ; ebus. xecution) Executable Feedback fools | web search tool lebugging tool jiagram tool a @ debuge = (J Structured Message(... Figure 2: An example of the communication protocol (left) and iterative programming with exe- cutable feedback (right). Left: Agents use a shared message pool to publish structured messages. They can also subscribe to relevant messages based on their profiles. | 2308.00352#9 | 2308.00352#11 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#11 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Right: After generating the initial code, the Engineer agent runs and checks for errors. If errors occur, the agent checks past messages stored in memory and compares them with the PRD, system design, and code files. from the environment. Finally, we introduce executable feedbackâ a self-correction mechanism for further enhancing code generation quality during run-time in Sec. 3.3. 3.1 AGENTS IN STANDARD OPERATING PROCEDURES Specialization of Roles Unambiguous role specialization enables the breakdown of complex work into smaller and more specific tasks. Solving complex tasks or problems often requires the collab- oration of agents with diverse skills and expertise, each contributing specialized outputs tailored to specific issues. In a software company, a Product Manager typically conducts business-oriented analysis and derives insights, while a software engineer is responsible for programming. We define five roles in our software company: Product Manager, Architect, Project Manager, Engineer, and QA Engineer, as shown in Figure 1. In MetaGPT, we specify the agentâ s profile, which includes their name, profile, goal, and constraints for each role. We also initialize the specific context and skills for each role. For instance, a Product Manager can use web search tools, while an Engineer can execute code, as shown in Figure 2. All agents adhere to the React-style behavior as described in Yao et al. (2022). Every agent monitors the environment (i.e., the message pool in MetaGPT) to spot important ob- servations (e.g.,, messages from other agents). These messages can either directly trigger actions or assist in finishing the job. Workflow across Agents By defining the agentsâ roles and operational skills, we can establish basic workflows. In our work, we follow SOP in software development, which enables all agents to work in a sequential manner. Specifically, as shown in Figure 1, upon obtaining user requirements, the Product Manager under- takes a thorough analysis, formulating a detailed PRD that includes User Stories and Requirement Pool. This serves as a preliminary functional breakdown. The structured PRD is then passed to the Architect, who translates the requirements into system design components, such as File Lists, Data Structures, and Interface Definitions. Once captured in the system design, the information is directed towards the Project Manager for task distribution. Engineers proceed to execute the des- ignated classes and functions as outlined (detailed in Figure 2). | 2308.00352#10 | 2308.00352#12 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#12 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | In the following stage, the QA Engineer formulates test cases to enforce stringent code quality. In the final step, MetaGPT pro- 4 # Preprint Make the 2048 sliding tile [ MetaGPT number puzzle game _ ) rroduct Manager Action: WritePRD (x) Architect Action: WriteDesign . . . atari Think: The boss requires the creation of 2048 sliding title number puzzle game. Think: For this project, we will use Pygame, an open-source library designed for making video games with Python. © Program call flow "Create a web-based 2048 game that is easy to playâ ## Product Goals - a Incorporate features that enhance user engagement and retention HH User Stories "Asa user, I want fo easily understand how to play the game so that Ican start playing immediately", "Asa user, I want the game to be engaging so that I ean play it fora long timeâ . = ## Competitive Analysis 32 Competitive Quadrant Chart [Anything Teach end nganet of canons == oe Unclear? vetoed sae ih sare) it v 2048 by Gabriele Cirulli: The e 2S = a original 2048 game, simple and i ats atabase py", 2048 by Ketchapp: Offers a â â "test_database.py"] Amsthing variety of game modes... - 2048 by Androbaby: Has a clean i interface and high score tracking... i Engineer â Action: WriteCode eeedeeen ean! ? ina] = â om Think: For this project, we will use Pygame, an open-source library â ## Requirement Analysis designed for making video games with Python. The product should be a 2048 sliding tile number puzzle game which is user-friendly. ## Code: game.py #i game.py import random ## Requirement Pool "Develop a simple and intuitive user interface", "PO", â Implement engaging gameplay mechanics", "PO", Think: List out the task list that meets the requirements and design. | d## Shared Knowledge "game pyâ contains the 'Game' class, which includes the game logic, class Gami roject Manager Action: WriteTasks HH Task List [*main.py", "game.py", "gui.py", "database-py"] def start(s â | 2308.00352#11 | 2308.00352#13 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#13 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Starts the game by adding two numbers to the boar ## Logic Analysis "main.py", "Contains the main game loop and handles user input.â ® main py ® gamepy © suipy Amiing â game.py", "Implements the game logic, including the score, and game over condition.â 2 database py ® test_gamepy test gui py Ve \ ~ e QA Engineer â Action: WriteCodeReview Human direct interaction for gameplay. =.) Think:For this project, we will use Pygame, an open-source library @ &D : designed for making video games with Python, > #4 Code quality review test_guipy Figure 3: A diagram showing the software development process in MetaGPT, emphasizing its sig- nificant dependence on SOPs. The more detailed demonstration can be found in Appendix B. duces a meticulously crafted software solution. We provide a detailed schematic (Figure 3) and a concrete instance (Appendix B) of the SOP workflow in MetaGPT. 3.2 COMMUNICATION PROTOCOL Structured Communication Interfaces Most current LLM-based multi-agent frameworks (Li et al., 2023; Zhuge et al., 2023; Zhang et al., 2023; Park et al., 2023) utilize unconstrained natural language as a communication interface. However, despite the versatility of natural language, a question arises: does pure natural language communication suffice for solving complex tasks? For example, in the telephone game (or Chinese whispers)2, after several rounds of communication, the original information may be quite distorted. Inspired by human social structures, we propose using structured communication to formulate the communication of agents. We establish a schema and format for each role and request that individ- uals provide the necessary outputs based on their specific role and context. As shown in Figure 3, the Architect agent generates two outputs: the system interface design and a sequence flow diagram. These contain system module design and interaction sequences, which serve as important deliverables for Engineers. Unlike ChatDev (Zhao et al., 2023), agents in MetaGPT | 2308.00352#12 | 2308.00352#14 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#14 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | # 2https://en.wikipedia.org/wiki/Chinese whispers 5 # Preprint communicate through documents and diagrams (structured outputs) rather than dialogue. These documents contain all necessary information, preventing irrelevant or missing content. Publish-Subscribe Mechanism Sharing information is critical in collaboration. For instance, Architects and Engineers often need to reference PRDs. However, communicating this information each time in a one-to-one manner, as indicated by previous work (Li et al., 2023; Zhao et al., 2023; Zhang et al., 2023), can complicate the communication topology, resulting in inefficiencies. To address this challenge, a viable approach is to store information in a global message pool. As shown in Figure 2 (left), we introduce a shared message pool that allows all agents to exchange messages directly. These agents not only publish their structured messages in the pool but also access messages from other entities transparently. Any agent can directly retrieve required information from the shared pool, eliminating the need to inquire about other agents and await their responses. This enhances communication efficiency. Sharing all information with every agent can lead to information overload. During task execution, an agent typically prefers to receive only task-related information and avoid distractions through irrelevant details. Effective management and dissemination of this information play a crucial role. We offer a simple and effective solution-subscription mechanism (in Figure 2 (left)). Instead of relying on dialogue, agents utilize role-specific interests to extract relevant information. They can select information to follow based on their role profiles. In practical implementations, an agent activates its action only after receiving all its prerequisite dependencies. As illustrated in Figure 3, the Architect mainly focuses on PRDs provided by the Product Manager, while documents from roles such as the QA Engineer might be of lesser concern. 3.3 # ITERATIVE PROGRAMMING WITH EXECUTABLE FEEDBACK In daily programming tasks, the processes of debugging and optimization play important roles. However, existing methods often lack a self-correction mechanism, which leads to unsuccessful code generation. Previous work introduced non-executable code review and self-reflection (Zhao et al., 2023; Yao et al., 2022; Shinn et al., 2023; Dong et al., 2023). However, they still face challenges in ensuring code executability and runtime correctness. | 2308.00352#13 | 2308.00352#15 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#15 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Our first MetaGPT implementations overlooked certain errors during the review process, due to LLM hallucinations (Manakul et al., 2023). To overcome this, after initial code generation, we introduce an executable feedback mechanism to improve the code iteratively. More specifically, as shown in Figure 2, the Engineer is asked to write code based on the original product requirements and design. This enables the Engineer to continuously improve code using its own historical execution and debugging memory. To obtain additional information, the Engineer writes and executes the corre- sponding unit test cases, and subsequently receives the test results. If satisfactory, additional devel- opment tasks are initiated. Otherwise the Engineer debugs the code before resuming programming. This iterative testing process continues until the test is passed or a maximum of 3 retries is reached. # 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTING Datasets We use two public benchmarks, HumanEval (Chen et al., 2021a) and MBPP (Austin et al., 2021), and a self-generated, more challenging software development benchmark named Soft- wareDev: (1) HumanEval includes 164 handwritten programming tasks. These tasks encompass function specifications, descriptions, reference codes, and tests. (2) MBPP consists of 427 Python tasks. These tasks cover core concepts and standard library features and include descriptions, ref- erence codes, and automated tests. (3) Our SoftwareDev dataset is a collection of 70 representa- tive examples of software development tasks, each with its own task prompt (see Table 5). These tasks have diverse scopes (See Figure 5), such as mini-games, image processing algorithms, data visualization. They offer a robust testbed for authentic development tasks. Contrary to previous datasets (Chen et al., 2021a; Austin et al., 2021), SoftwareDev focuses on the engineering aspects. In the comparisons, we randomly select seven representative tasks for evaluation. | 2308.00352#14 | 2308.00352#16 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#16 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 6 Preprint Evaluation Metrics For HuamnEval and MBPP, we follow the unbiased version of Pass @k as presented by (Chen et al., 2021a; Dong et al., 2023), to evaluate the functional accuracy of the top-k generated codes: Pass @k = Eprobiems â ( 7. # (nâ c k ) (n k) 1 â For SoftwareDev, we prioritize practical use and evaluate performance through human evaluations (A, E) or statistical analysis (B, C, D): (A) Executability: this metric rates code from 1 (failure/non- functional) to 4 (flawless). â 1â is for non-functional, â 2â for runnable but imperfect, â 3â for nearly perfect, and â 4â for flawless code. (B) Cost: the cost evaluations here include the (1) running time, (2) token usage, and (3) expenses. (C) Code Statistics: this includes (1) code files, (2) lines of code per file, and (3) total code lines. (D) Productivity: basically, it is defined as the number of token usage divided by the number of lines of code, which refers to the consumption of tokens per code line. (E) Human Revision Cost: quantified by the number of rounds of revision needed to ensure the smooth running of the code, this indicates the frequency of human interventions, such as debugging or importing packages. Baselines We compare our method with recent domain-specific LLMs in the code generation field, including AlphaCode (Li et al., 2022), Incoder (Fried et al., 2022), CodeGeeX (Zheng et al., 2023), CodeGen (Nijkamp et al., 2023), CodeX (Chen et al., 2021a), and CodeT (Chen et al., 2022) and general domain LLMs such as PaLM (Chowdhery et al., 2022), and GPT-4 (OpenAI, 2023). Several results of baselines (such as Incoder, CodeGeeX) are provided by Dong et al. (2023). | 2308.00352#15 | 2308.00352#17 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#17 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | We modify certain role-based prompts in MetaGPT to generate code suitable for the target prob- lem (e.g., generate functions instead of classes for HumanEval and MBPP). With the SoftwareDev benchmark, we provide a comprehensive comparison between MetaGPT, AutoGPT (Torantulino et al., 2023), LangChain (Chase, 2022) with Python Read-Eval-Print Loop (REPL) tool3, Agent- Verse (Chen et al., 2023), and ChatDev (Qian et al., 2023). 4.2 MAIN RESULT 5 Humaneval Pass@1 of MBPP and HumanEval (%) poseeesueeesneeenueeeensceeseess [ MBpP i gi7 823 65.8 67.7 05.9 87.7 } 15.2 176 >) >) ) oy 5) A & A > s . & & s S © Kos 6 3 & g ea & x7 OQ og or Kr SO « ee Ks we g FD tS Ye st We e é £ CRS ee e xe . s&s vs & 3 Figure 4: Pass rates on the MBPP and HumanEval with a single attempt. Performance Figure 4 demonstrates that MetaGPT outperforms all preceding approaches in both HumanEval and MBPP benchmarks. When MetaGPT collaborates with GPT-4, it significantly im- proves the Pass @k in the HumanEval benchmark compared to GPT-4. It achieves 85.9% and 87.7% in these two public benchmarks. Moreover, as shown in Table 1, MetaGPT outperforms ChatDev on the challenging SoftwareDev dataset in nearly all metrics. For example, considering the executabil- ity, MetaGPT achieves a score of 3.75, which is very close to 4 (flawless). Besides, it takes less time (503 seconds), clearly less than ChatDev. Considering the code statistic and the cost of human revi- sion, it also significantly outperforms ChatDev. Although MetaGPT requires more tokens (24,613 or 31,255 compared to 19,292), it needs only 126.5/124.3 tokens to generate one line of code. In contrast, ChatDev uses 248.9 tokens. | 2308.00352#16 | 2308.00352#18 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#18 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | These results highlight the benefits of SOPs in collabora- tions between multiple agents. Additionally, we demonstrate the autonomous software generation capabilities of MetaGPT through visualization samples (Figure 5). For additional experiments and analysis, please refer to Appendix C. # 3https://en.wikipedia.org/wiki/Readâ evalâ print loop 7 Preprint | yo Ton fi, a = Pec none S vont Chik a = Figure 5: Demo softwares developed by MetaGPT. # Table 1: The statistical analysis on SoftwareDev. ChatDev Statistical Index MetaGPT w/o Feedback MetaGPT (A) Executability (B) Cost#1: Running Times (s) (B) Cost#2: Token Usage (C) Code Statistic#1: Code Files (C) Code Statistic#2: Lines of Code per File (C) Code Statistic#3: Total Code Lines (D) Productivity (E) Human Revision Cost 2.25 762 19,292 1.9 40.8 77.5 248.9 2.5 3.67 503 24,613 4.6 42.3 194.6 126.5 2.25 3.75 541 31,255 5.1 49.3 251.4 124.3 0.83 # 4.3 CAPABILITIES ANALYSIS Compared to open-source baseline methods such as AutoGPT and autonomous agents such as AgentVerse and ChatDev, MetaGPT offers functions for software engineering tasks. As presented in Table 2, our framework encompasses a wide range of abilities to handle complex and specialized development tasks efficiently. Incorporating SOPs (e.g., role-play expertise, structured communi- cation, streamlined workflow) can significantly improve code generation. Other baseline methods can easily integrate SOP-like designs to improve their performance, similar to injecting chain-of- thought (Wei et al., 2022) in LLMs. 4.4 ABLATION STUDY The Effectiveness of Roles To understand the impact of different roles on the final results, we perform two tasks that involve generating effective code and calculating average statistics. When we exclude certain roles, unworkable codes are generated. As indicated by Table 3, the addition of roles different from just the Engineer consistently improves both revisions and executability. | 2308.00352#17 | 2308.00352#19 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#19 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | While more roles slightly increase the expenses, the overall performance improves noticeably, demonstrating the effectiveness of the various roles. 8 Preprint Table 2: Comparison of capabilities for MetaGPT and other approaches. â /° indicates the presence of a specific feature in the corresponding framework, â X° its absence. Framework Capabiliy AutoGPT LangChain AgentVerse ChatDev MetaGPT PRD generation Tenical design genenration API interface generation Code generation Precompilation execution Role-based task management Code review x x v x x x *x* Â¥} NX & N\& N&O NAN SAN & SANNA Table 3: Ablation study on roles. â #â denotes â The number ofâ , â Productâ denotes â Product man- agerâ , and â Projectâ denotes â Project managerâ . â /â indicates the addition of a specific role. â Revi- sionsâ refers to â Human Revision Costâ . Engineer Product Architect Project | #Agents #Lines Expense Revisions Executability Engineer Product Architect Project | #Agents #Lines Expense Revisions Executability The Effectiveness of Executable Feedback Mechanism As shown in Figure 4, adding executable feedback into MetaGPT leads to a significant improvement of 4.2% and 5.4% in Pass @1 on Hu- manEval and MBPP, respectively. Besides, Table 1 shows that the feedback mechanism improves feasibility (3.67 to 3.75) and reduces the cost of human revisions (2.25 to 0.83). These results illustrate how our designed feedback mechanism can produce higher-quality code. Additional quan- titative results of MetaGPT and MetaGPT without executable feedback are shown in Table 4 and Table 6. # 5 CONCLUSION This work introduces MetaGPT, a novel meta-programming framework that leverages SOPs to en- hance the problem-solving capabilities of multi-agent systems based on Large Language Models (LLMs). MetaGPT models a group of agents as a simulated software company, analogous to simu- lated towns (Park et al., 2023) and the Minecraft Sandbox in Voyager (Wang et al., 2023a). | 2308.00352#18 | 2308.00352#20 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#20 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | MetaGPT leverages role specialization, workflow management, and efficient sharing mechanisms such as mes- sage pools and subscriptions, rendering it a flexible and portable platform for autonomous agents and multi-agent frameworks. It uses an executable feedback mechanism to enhance code generation quality during runtime. In extensive experiments, MetaGPT achieves state-of-the-art performance on multiple benchmarks. The successful integration of human-like SOPs inspires future research on human-inspired techniques for artificial multi-agent systems. We also view our work as an early attempt to regulate LLM-based multi-agent frameworks. See also the outlook (Appendix A). # Acknowledgement We thank Sarah Salhi, the Executive Secretary of KAUST AI Initiative, and Yuhui Wang, Postdoc- toral Fellow at the KAUST AI Initiative, for helping to polish some of the text. We would like to express our gratitude to Wenyi Wang, a PhD student at the KAUST AI Initiative, for providing com- prehensive feedback on the paper and for helping to draft the outlook (Appendix A) with Mingchen. We also thank Zongze Xu, the vice president of DeepWisdom, for providing illustrative materials for AgentStore. | 2308.00352#19 | 2308.00352#21 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#21 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 9 Preprint # Author Contributions Sirui Hong conducted most of the experiments and designed the executable feedback module. She also led the initial version of the write-up, supported by Ceyao Zhang, and also by Jinlin Wang and Zili Wang. Mingchen Zhuge designed the self-improvement module, discussed additional experi- ments, and led the current write-up. Jonathan Chen helped with the MBPP experiments, outlined the methods section, and contributed to the current write-up. Xiawu Zheng provided valuable guid- ance, reviewed and edited the paper. Yuheng Cheng contributed to the evaluation metric design and HumanEval experiments. Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Lingfeng Xiao helped with the MBPP experiments and comparisons to open-source baseline methods. Chenyu Ran cre- ated most of the illustrative figures. Chenglin Wu is the CEO of DeepWisdom, initiated MetaGPT, made the most significant code contributions to it, and advised this project. J¨urgen Schmidhuber, Director of the AI Initiative at KAUST and Scientific Director of IDSIA, advised this project and helped with the write-up. | 2308.00352#20 | 2308.00352#22 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#22 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | # REFERENCES Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, and Eric Schulz. Playing repeated games with large language models. arXiv preprint, 2023. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. Human-level play in the game of diplomacy by com- bining language models with strategic reasoning. | 2308.00352#21 | 2308.00352#23 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#23 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Science, 378(6624):1067â 1074, 2022. Robert Balzer. A 15 year perspective on automatic programming. IEEE Transactions on Software Engineering, 11(11):1257â 1268, 1985. R.M. Belbin. Team Roles at Work. Routledge, 2012. URL https://books.google.co.uk/ books?id=MHIQBAAAQBAJ. Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint, 2023. | 2308.00352#22 | 2308.00352#24 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#24 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | # Harrison Chase. LangChain. https://github.com/hwchase17/langchain, 2022. Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. Codet: Code generation with generated tests, 2022. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fo- tios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob Mc- Grew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. | 2308.00352#23 | 2308.00352#25 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#25 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Evaluating large language models trained on code, 2021a. Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. Agentverse: Facili- tating multi-agent collaboration and exploring emergent behaviors in agents, 2023. Xinyun Chen, Chang Liu, and Dawn Song. Execution-guided neural program synthesis. In ICLR, 2018. Xinyun Chen, Dawn Song, and Yuandong Tian. Latent execution for neural program synthesis beyond domain-specific languages. NeurIPS, 2021b. | 2308.00352#24 | 2308.00352#26 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#26 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 10 Preprint Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Lev- skaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Bren- nan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. | 2308.00352#25 | 2308.00352#27 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#27 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Palm: Scaling language modeling with pathways, 2022. T. DeMarco and T.R. Lister. Peopleware: Productive Projects and Teams. Addison-Wesley, 2013. URL https://books.google.co.uk/books?id=DVlsAQAAQBAJ. Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. Self-collaboration code generation via chatgpt. arXiv preprint, 2023. Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate, 2023. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. Codebert: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155, 2020. Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rockt¨aschel. arXiv preprint Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv:2309.16797, 2023. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning, pp. 1126â | 2308.00352#26 | 2308.00352#28 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#28 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 1135. PMLR, 2017. Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. Incoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999, 2022. Irving John Good. Speculations concerning the first ultraintelligent machine. Adv. Comput., 6: 31â 88, 1965. doi: 10.1016/S0065-2458(08)60418-0. URL https://doi.org/10.1016/ S0065-2458(08)60418-0. | 2308.00352#27 | 2308.00352#29 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#29 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Rui Hao, Linmei Hu, Weijian Qi, Qingliu Wu, Yirui Zhang, and Liqiang Nie. Chatllm network: More brains, more intelligence. arXiv preprint, 2023. S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In Lecture Notes on Comp. Sci. 2130, Proc. Intl. Conf. on Artificial Neural Networks (ICANN-2001), pp. 87â 94. Springer: Berlin, Heidelberg, 2001. Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents forâ mindâ exploration of large scale language model society. arXiv preprint, 2023. Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R´emi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. Science, 2022. Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multi- agent debate. arXiv preprint, 2023. Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. | 2308.00352#28 | 2308.00352#30 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#30 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. arXiv preprint, 2023. 11 Preprint Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568, 2023. | 2308.00352#29 | 2308.00352#31 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#31 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Potsawee Manakul, Adian Liusie, and Mark JF Gales. Selfcheckgpt: Zero-resource black-box hallu- cination detection for generative large language models. arXiv preprint arXiv:2303.08896, 2023. Agile Manifesto. Manifesto for agile software development. Snowbird, UT, 2001. John McCarthy. History of lisp. In History of programming languages, pp. 173â 185. 1978. | 2308.00352#30 | 2308.00352#32 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#32 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Ansong Ni, Srini Iyer, Dragomir Radev, Veselin Stoyanov, Wen-tau Yih, Sida Wang, and Xi Victoria Lin. Lever: Learning to verify language-to-code generation with execution. In ICML, 2023. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis, 2023. OpenAI. Gpt-4 technical report, 2023. | 2308.00352#31 | 2308.00352#33 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#33 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Joon Sung Park, Joseph C Oâ Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint, 2023. Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development, 2023. Baptiste Rozi`ere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, J´er´emy Rapin, et al. | 2308.00352#32 | 2308.00352#34 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#34 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint, 2023. J. Schmidhuber. A self-referential weight matrix. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, pp. 446â 451. Springer, 1993a. J. Schmidhuber. G¨odel machines: self-referential universal problem solvers making provably optimal self-improvements. Technical Report IDSIA-19-03, arXiv:cs.LO/0309048 v3, IDSIA, Manno-Lugano, Switzerland, December 2003. J. Schmidhuber. G¨odel machines: Fully self-referential optimal universal self-improvers. In B. Go- ertzel and C. Pennachin (eds.), Artificial General Intelligence, pp. 199â 226. Springer Verlag, 2006. Variant available as arXiv:cs.LO/0309048. J. Schmidhuber. Ultimate cognition `a la G¨odel. Cognitive Computation, 1(2):177â 193, 2009. J¨urgen Schmidhuber. | 2308.00352#33 | 2308.00352#35 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#35 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. PhD thesis, Technische Universit¨at M¨unchen, 1987. J¨urgen Schmidhuber. A â self-referentialâ weight matrix. In ICANNâ 93: Proceedings of the Interna- tional Conference on Artificial Neural Networks Amsterdam, The Netherlands 13â 16 September 1993 3, pp. 446â 450. Springer, 1993b. J¨urgen Schmidhuber. | 2308.00352#34 | 2308.00352#36 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#36 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | On learning to think: Algorithmic information theory for novel combina- tions of reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249, 2015. J¨urgen Schmidhuber, Jieyu Zhao, and Nicol N Schraudolph. Reinforcement learning with self- modifying policies. In Learning to learn, pp. 293â 309. Springer, 1998. 12 Preprint Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint, 2023. Marta Skreta, Naruki Yoshikawa, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Kourosh Darvish, Al´an Aspuru-Guzik, Florian Shkurti, and Animesh Garg. Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting. arXiv preprint, 2023. | 2308.00352#35 | 2308.00352#37 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#37 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Elliot Soloway. Learning to program = learning to construct mechanisms and explanations. Com- munications of the ACM, 29(9):850â 858, 1986. Torantulino et al. Auto-gpt. https://github.com/Significant-Gravitas/ Auto-GPT, 2023. R. J. Waldinger and R. C. T. Lee. PROW: a step toward automatic program writing. In D. E. Walker and L. M. Norton (eds.), Proceedings of the 1st International Joint Conference on Artificial Intel- ligence (IJCAI), pp. 241â 252. Morgan Kaufmann, 1969. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. | 2308.00352#36 | 2308.00352#38 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#38 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Voyager: An open-ended embodied agent with large language models. arXiv preprint, 2023a. Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432, 2023b. Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self- collaboration. arXiv preprint, 2023c. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022. In Pro- ceedings of the Second International Conference on Autonomous Agents, 1998. URL https: //doi.org/10.1145/280765.280867. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint, 2022. Eric Zelikman, Eliana Lorch, Lester Mackey, and Adam Tauman Kalai. Self-taught optimizer (stop): Recursively self-improving code generation. arXiv preprint arXiv:2310.02304, 2023. Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tian- min Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models. arXiv preprint, 2023. Xufeng Zhao, Mengdi Li, Cornelius Weber, Muhammad Burhan Hafez, and Stefan Wermter. Chat with the environment: Interactive multimodal perception using large language models. arXiv preprint, 2023. | 2308.00352#37 | 2308.00352#39 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#39 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x, 2023. Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. | 2308.00352#38 | 2308.00352#40 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#40 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Webarena: A realistic web environment for building autonomous agents. arXiv preprint, 2023. Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R Ashley, R´obert Csord´as, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, et al. Mindstorms in natural language-based societies of mind. arXiv preprint arXiv:2305.17066, 2023. | 2308.00352#39 | 2308.00352#41 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#41 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 13 Preprint # A OUTLOOK A.1 SELF-IMPROVEMENT MECHANISMS One limitation of the MetaGPT version in the main text of this paper is that each software project is executed independently. However, through active teamwork, a software development team should learn from the experience gained by developing each project, thus becoming more compatible and successful over time. This is somewhat related to the idea of recursive self-improvement, first informally proposed in 1965 (Good, 1965), with first concrete implementations since 1987 (Schmidhuber, 1987; 1993b; Schmidhuber et al., 1998), culminating in the concept of mathematically optimal self-referential self-improvers (Schmidhuber, 2003; 2009). Generally speaking, a system should learn from experi- ence in the real world, and meta-learn better learning algorithms from experiences of learning, and meta-meta-learn better meta-learning algorithms from experiences of meta-learning, etc., without any limitations except those of computability and physics. More recent, somewhat related work leverages the reasoning ability of Large Language Models (LLMs) and recursively improves prompts of LLMs, to improve performance on certain downstream tasks (Fernando et al., 2023; Zelikman et al., 2023), analogous to the adaptive prompt engineer of 2015 (Schmidhuber, 2015) where one neural network learns to generate sequence of queries or prompts for another pre-trained neural network whose answers may help the first network to learn new tasks more quickly. In our present work, we also explore a self-referential mechanism that recursively modifies the con- straint prompts of agents based on information they observe during software development. Our initial implementation works as follows. Prior to each project, every agent in the software company reviews previous feedback and makes necessary adjustments to their constraint prompts. This en- ables them to continuously learn from past project experiences and enhance the overall multi-agent system by improving each individual in the company. We first establish a handover feedback action for each agent. This action is responsible for critically summarizing the information received dur- ing the development of previous projects and integrating this information in an updated constraint prompt. The summarized information is stored in long-term memory such that it can be inherited by future constraint prompt updates. | 2308.00352#40 | 2308.00352#42 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#42 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | When initiating a new project, each agent starts with a react action. Each agent evaluates the received feedback and summarizes how they can improve in a constraint prompt. One current limitation is that these summary-based optimizations only modify constraints in the specialization of roles (Sec. 3.1) rather than structured communication interfaces in communication protocols (Sec. 3.2). Future advancements are yet to be explored. # A.2 MULTI-AGENT ECONOMIES In real-world teamwork, the interaction processes are often not hardcoded. For example, in a soft- ware company, the collaboration SOP may change dynamically. One implementation of such self-organization is discussed in the paper on a â Natural Language- Based Society of Mindâ (NLSOM) (Zhuge et al., 2023), which introduced the idea of an â Economy of Mindsâ (EOM), a Reinforcement Learning (RL) framework for societies of LLMs and other agents. Instead of using standard RL techniques to optimize the total reward of the system through modifications of neural network parameters, EOMs use the principles of supply and demand in free markets to assign credit (money) to those agents that contribute to economic success (reward). The recent agent-based platform of DeepWisdom (AgentStore4) is compatible with the credit as- signment concept of EOMs. Each agent in AgentStore provides a list of services with corresponding costs. A convenient API is provided so that human users or agents in the platform can easily pur- chase services from other agents to accomplish their services. Figure 6 displays the User Interface (UI) of AgentStore, where various agents with different skills are showcased. Besides, individual developers can participate in building new agents and enable collaborative development within the community. Specifically, AgentStore allows users to subscribe to agents according to their demands # 4http://beta.deepwisdom.ai 14 # Preprint | 2308.00352#41 | 2308.00352#43 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#43 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | and pay according to their usage. Moreover, users can purchase additional capabilities to expand the plug-and-play functions of their existing agents. This allows users to gradually upgrade their agents. Within the MetaGPT framework, AgentStore can support the collaboration of various agents. Users can collect several agents together to carry out more complex tasks or projects, and all the agents share and comply with development and communication protocols defined in MetaGPT. © Conversations 2a aQ2asrzeeetewk® B&B a Ff Marin Minsky Wednesday Plato Software Com, ML Engineer cartoon 30 ComicArist Tutorial Assist. EqutyAnalyst. Beauty Photo. â CeficatePh._-â _Donalé Tru Tutorial Asset. @ Agents ible #3 â cartoon 30 a £ Certicate Ph. Beauty Photo EI celebritiesâ Planet Al) Westerncelebrty Movie TV. | 2308.00352#42 | 2308.00352#44 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#44 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Historica Figure John Coffey eat > tat» a Wednesday Addams cat > Figure 6: AgentStore is a platform dedicated to serving users in the creation and development of agents within the MetaGPT framework. This platform provides users with an operational interface, allowing users to easily manage a variety of agents with different emotions, personalities, and capa- bilities for specific tasks. 15 Preprint B A DEMO OF THE EXECUTION In this section, we outline the complete process of software development using MetaGPT. It begins with a userâ s input command (as shown in Appendix B.1) and ends with software designed according to the userâ s specifications. B.1 USER INPUT Upon receiving an instruction from the user, MetaGPT collaborates with a professional development team to fulfill the task. Here is a demo of user input: User input 1 write a python3 GUI app such that you can draw an image with it | 2308.00352#43 | 2308.00352#45 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#45 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | B.2 METAGPT DEVELOPMENT PROCESS Now we provide a step-by-step explanation of the standardized output process for each agent. 16 # Preprint Product Manager The Product Manager generates a Product Requirement Document (PRD), as detailed in the specified documentation. This document encompasses goals, user stories, competitive analysis, requirement analysis and requirement pool. Additionally, a competitive quadrant chart is produced (see Figure 7). Subsequently, these documents and charts are handed over to the architect for system design. # Product Requirement Document ## Original Requirements The boss requires a Python3 GUI color meter that outputs the RGB values when it moves. ## Product Goals ***python [ "Create a user-friendly GUI color meter", "Ensure the color meter accurately outputs RGB values", "Ensure the color meter updates RGB values in real-time as it movesâ ## User Stories ***python "As a user, I want to select any color on the screen, so that I can get its RGB values", "As a user, I want the RGB values to update in real-time as I move the color meter, so that I <> can get accurate color information", "As a user, I want a simple and intuitive interface, so that I can easily use the color meter" # Product Requirement Document Product Requirement Document ## Competitive Analysis ***python [ "Color Cop: A popular color picker for Windows. | 2308.00352#44 | 2308.00352#46 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#46 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Itâ s simple and easy to use, but lacks real- <>» time RGB value updates", "Just Color Picker: Offers real-time color picking and multiple color code formats. However, <» its interface is cluttered and not very user-friendly", "Instant Eyedropper: A simple color picker that lives in the system tray. Itâ s convenient but <» lacks advanced features", "ColorZilla: A color picker extension for browsers. Itâ s convenient for web design but not <>» suitable for desktop applications", "ColorPic: Offers color palettes and mixer tools. | 2308.00352#45 | 2308.00352#47 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#47 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Itâ s feature-rich but can be overwhelming <» for simple tasks" ## Requirement Analysis The product should be a Python3 GUI application that serves as a color meter. It should allow <> users to select any color on the screen and display the corresponding RGB values. The RGB <> values should update in real-time as the color meter moves. The interface should be <> simple and user-friendly. ## Requirement Pool ***python [ ("Design a user-friendly GUI for the color meter", "PO"), ("Implement color selection functionality", "P0O"), ("Display RGB values of the selected color", "PO"), ("Update RGB values in real-time as the color meter moves", "PO"), ("Test the application for accuracy and performance", "P1") ## UI Design draft The UI should be minimalistic and intuitive. It should primarily consist of a color meter, which <>» could be a magnified area of the screen that the user can move around. The RGB values of <>» the currently selected color should be prominently displayed, updating in real-time as <>» the color meter moves. The layout should be clean and uncluttered, with a focus on <â functionality and ease of use. ## Anything UNCLEAR There are no unclear points. | 2308.00352#46 | 2308.00352#48 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#48 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 17 Preprint Reach and engagement of campaigns Need to promote We should expand Core! Painter 5 â Adobe SPotoshop â ¬ 8 3 3 Krita SketchBook D> 2 wi e < Our Target ProdBtiiP 2 tT ° Re-evaluatePaint.NET May be improved = as Baint oo â ¬ © D oO D> 2 in = co} 4S Low Reach High Reach Figure 7: The quadrant chart for the â Drawing Appâ | 2308.00352#47 | 2308.00352#49 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#49 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | generated by the product manager agent. Architect Based on the requirements in PRD, the Architect agent devises technical specifications including system architecture diagrams and interface definitions. Initially, the Architect defines the overarching technical trajectory. Subsequently, the projectâ s architecture, including files, classes (Figure 8) and the sequence flow chart (Figure 12), is designed. The Architectâ s documentation is then given to the project manager for task allocation and execution. System design ## Implementation approach We will use Pythonâ s Tkinter library to create the GUI for the color meter. Tkinter is a standard Python interface to the Tk GUI toolkit and is included with most Python installations. | 2308.00352#48 | 2308.00352#50 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#50 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | It â + is simple to use and powerful enough for our needs. The color selection functionality will be implemented using the PIL (Pillow) library, which allows â us to capture the screenâ s pixel data. By moving the mouse, we can get the RGB values of â + the pixel under the mouse cursor in real-time The RGB values will be displayed on the GUI using Tkinterâ s Label widget. We will use Python's © threading library to update the RGB values in real-time as the color meter moves The application will be tested for accuracy and performance using Pythonâ s unittest library ## Python package name *â spython "color_meter" ## File list â â spython C "main.py", "color_picker.py", "gui-py", "tests.py" ## Anything UNCLEAR The requirement is clear to me 18 Preprint | 2308.00352#49 | 2308.00352#51 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#51 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | ColorPicker +get_rgb_at(x: int, y: int) : Tuple[int, int, int] uses GUI +__init__(color_picker: ColorPicker) +update_rgb_label(rgb: Tuple[int, int, int]) +start() Figure 8: Data structures and interface definitions for the â Drawing Appâ generated by the architect agent. Project Manager The Project Manager breaks down the project into a task list. Furthermore, each code file is analyzed based on its intended functionality and then treated as a separate task assigned to Engineers. | 2308.00352#50 | 2308.00352#52 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#52 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | main ColorPicker GUI Instantiate >» Instantiate(CP) 7 start() â â â SSSSS SS Ss User moves mouse get_rgb_at(x, y) update_rgb_label(rgb) main ColorPicker GUI Figure 9: The program call flow for the â Drawing Appâ generated by the architect agent. 19 # Preprint # Tasks ## Required Python third-party packages ***python now tkinter==8.6 Pillow==8.3.1 won Vv ## Required Other language third-party packages ***python now No third-party packages required in other languages. now Vv ## Full API spec ***python now No APIs are used in this project. now Vv ## Logic Analysis ***python [ ("main.py", "Contains the main application loop and initializes all the other components."), ("canvas.py", "Implements the drawing canvas, responds to mouse events, and uses the selected <+ tool and color."), ("tools.py", "Implements the tool selection interface."), ("color_picker.py", â Implements the color selection interface."), ("file_manager.py", â Implements the file saving and opening functionality.") Product Requirement Document | 2 3 4 aADM -â -OwO eo ROS ## Task list ***python [ "main.py", "canvas.py", "tools.py", "color_picker.py", "file_manager.py"â ## Shared Knowledge ***python now The â Tkinterâ library is used for creating the GUI of the application. It provides various widgets <>» like frames and canvases which are used in this project. | 2308.00352#51 | 2308.00352#53 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#53 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | The â Pillowâ library is used for handling image files. It is used in the â file_manager.pyâ for <+ saving and opening image files. The â main.pyâ file is the entry point of the application. It initializes all the other components <>» and starts the application loop. ## Anything UNCLEAR The project requirements and implementation approach are clear. However, we need to ensure that <> all team members are familiar with the Tkinter and Pillow libraries. If not, they may <> need some time to learn these libraries before starting the development. | 2308.00352#52 | 2308.00352#54 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#54 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 20 # Preprint Engineer Given the provided file structure and function definitions, an Engineer agent requires only fundamental development skills to complete the development tasks. Due to the large number of files, we present only one auto-generated code file here. Generated code (color picker.py) 1 import tkinter as tk 2 from tkinter import colorchooser 3 class ColorPicker: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 def __init__(self, root: tk.Tk, color: str = â | 2308.00352#53 | 2308.00352#55 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#55 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | blackâ ): self.frame = tk.Frame(root) self.color = color self.color_button = tk.Button(self.frame, text=â Colorâ , command=self.select_color) self.color_button.pack(fill=â xâ ) def select_color(self): color = colorchooser.askcolor()[1] if color is not None: self.color = color def pack(self, **kwargs): self.frame.pack(**kwargs) def get_color(self): return self.color QA Engineer Upon receiving the code output from the Engineer, the QA Engineer generates unit test code and reviews it to identify and fix any bugs, ensuring high-quality software. # Generated Unit tests (test.py) 1 import unittest 2 import tkinter as tk 3 # You might want to move the ColorPicker class to another file, letâ s say color_picker.py 4 # from color_picker import ColorPicker 5 6 class TestColorPicker(unittest.TestCase): 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 if __name__ == â __main__â : unittest.main() 28 Output Ultimately, as shown in Figure 10, MetaGPT generates a functional application named â Drawing Appâ . 21 Preprint Drawing App Colors Color Pencil Brush Eraser a- Figure 10: The â Drawing Appâ | 2308.00352#54 | 2308.00352#56 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#56 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | generated by MetaGPT. C EXPERIMENTS C.1 DETAILS OF THE SOFTWAREDEV DATASET The SoftwareDev dataset includes 70 diverse software development tasks. Table 5 displays the names and detailed prompts of 11 tasks within the dataset. Note that the first seven tasks listed are used in the main experiments of this paper. C.2 ADDITIONAL RESULTS Quantitative results of MetaGPT As shown in Table 4, MetaGPT achieves an average score of 3.9, surpassing ChatDevâ s score of 2.1 Zhao et al. (2023), which is based on the Chat chain. Compare the scores of general intelligent algorithms, including AutoGPT Torantulino et al. (2023), which all score 1.0, failing to generate executable code. We observe that the generated code is often short, lacks comprehensive logic, and tends to fail to handle cross-file dependencies correctly. While models such as AutoGPT (Torantulino et al., 2023), Langchain (Chase, 2022), and Agent- Verse (Chen et al., 2023) display robust general problem-solving capabilities, they lack an essential element for developing complex systems: systematically deconstructing requirements. Conversely, MetaGPT simplifies the process of transforming abstract requirements into detailed class and func- tion designs through a specialized division of labor and SOPs workflow. When compared to Chat- Dev (Zhao et al., 2023), MetaGPTâ s structured messaging and feedback mechanisms not only reduce loss of communication information but also improve the execution of code. Quantitative results of MetaGPT w/o executable feedback Table 6 presents the performance of MetaGPT with GPT-4 32K on 11 tasks within the SoftwareDev dataset. It also shows the average performance across all 70 tasks (in the last line). Note that the version of MetaGPT used here is the basic version without the executable feedback mechanism. Qualitative results Figure 11 and Figure 12 illustrate the outcomes of the Architect agentâ s ef- forts to design a complex recommender system. These figures showcase the comprehensive system interface design and program call flow. The latter are essential for creating a sophisticated auto- mated system. It is crucial to emphasize the importance of this division of labor in developing an automated software framework. | 2308.00352#55 | 2308.00352#57 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#57 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 22 Preprint Table 4: Executability comparison. The executability scores are on a grading system ranging from â 1â to â 4â . A score of â 1â signifies complete failure, â 2â denotes executable code, â 3â represents largely satisfying expected workflow, and â 4â indicates a perfect match with expectations. Task Flappy bird Tank battle game 2048 game Snake game Brick breaker game Excel data process CRUD manage Average score AutoGPT 1 1 1 1 1 1 1 1.0 LangChain 1 1 1 1 1 1 1 1.0 AgentVerse 1 1 1 1 1 1 1 1.0 ChatDev MetaGPT 2 2 1 3 1 4 2 2.1 3 4 4 4 4 4 4 3.9 ContentBasedFiltering Model . Advertising Privacy y +DataFrame data \ +int advertiser_id +User user +str model_type \ +str target_audience \ + init_(user: User): -> _init_(user_id: int, user_profile: UserProfile, + init (advertiser_id: int, target_audience: str): -> +ensure_privacy(): -> f_model: CollaborativeFilteringModel, 174 cbf_model: ContentBasedFilteringModel) -> | | Content User eT +int content id ae name 4str ttle | mal 4+str description +str category | SETS EC EA +DateTime published_at | / (/ +-init_(user_id: int, name: str,email: str): -> +_init_(content_id: int, tite: str, description: str, category: str, published_at: DateTime) :-> _â | i/ I~ [a â | â â | 2308.00352#56 | 2308.00352#58 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#58 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 1 User Profile Collaborative FilteringModel _ Feedback tint user_id +dict preferences +DataFrame data +int user_id ++dict history +str model_type +int content id +int rating +_init_(user_id: int, +predict(user_id: int, \ preferences: dict, nLrecommendations: int) -> Listlint] + init (user_id: int, content_id: int, rating: int) : -> history: dict) -> fl Recommender tint user_id _â â || #UserProfile user_profile â 4Collaborative Filtering Model cf_model 1 +ContentBasedFiltering Model cbf_model +_init_(user_id: int, user_profile: UserProfile, cf_model: CollaborativeFilteringModel, cbf_model: ContentBasedFilteringModel) -> â | 2308.00352#57 | 2308.00352#59 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#59 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | ON â ¢ aA \i 1 Experimentation Platform Optimization Monitoring +List{Recommender] recommenders +Recommender recommender +Recommender recommender +_init_(recommenders: List[Recommender]) : -> +_init (recommender: Recommender) : -> _+_init_ (recommender: Recommender): -> -+run_experiment(user recommendations: int): -> Dictlstr, Listlint]] +optimize() : -> +generate_report() : -> Figure 11: The system interface design for â recommendation engine developmentâ generated by the architect agent (zoom in for a better view). | 2308.00352#58 | 2308.00352#60 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#60 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 23 Preprint User UserProfile Recommender RST SRN TRASH eee Feedback Optimization Monitoring Privacy Advertising create UserProfile iatali create Recommender â fit model â ht model run_experiment() get_recommendations() â I" predict() predict() submit feedback update models " Ft model fit model optimize() > update models I generate_report() ensure_privacy() | ehaurete jacy() â 4 User UserProfle Recommender ey er ead Feedback Optimization Monitoring Privacy Advertising Figure 12: The program call flow for â recommendation engine developmentâ generated by the architect agent (zoom in for a better view). | 2308.00352#59 | 2308.00352#61 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#61 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | 24 Preprint Table 5: Examples of SoftwareDev dataset. Task ID Task 0 1 2 3 Snake game Brick breaker game 2048 game Flappy bird game 4 5 Tank battle game Excel data process 6 CRUD manage 7 Music transcriber Prompt Create a snake game. Create a brick breaker game. Create a 2048 game for the web. Write p5.js code for Flappy Bird where you control a yellow bird continu- ously flying between a series of green pipes. The bird flaps every time you left click the mouse. If it falls to the ground or hits a pipe, you lose. This game goes on indefinitely until you lose; you get points the further you go. Create a tank battle game. | 2308.00352#60 | 2308.00352#62 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#62 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Write an excel data processing program based on streamlit and pandas. The screen first shows an excel file upload button. After the excel file is uploaded, use pandas to display its data content. The program is required to be concise, easy to maintain, and not over-designed. It uses streamlit to process web screen displays, and pandas is sufficient to process excel reading and display. Please make sure others can execute directly without introducing additional packages. Write a management program based on the crud addition, deletion, modifi- cation and query processing of the customer business entity. The customer needs to save this information: name, birthday, age, sex, and phone. The data is stored in client.db, and there is a judgement whether the customer table ex- ists. | 2308.00352#61 | 2308.00352#63 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#63 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | If it doesnâ t, it needs to be created first. Querying is done by name; same for deleting. The program is required to be concise, easy to maintain, and not over-designed. The screen is realized through streamlit and sqliteâ no need to introduce other additional packages. Develop a program to transcribe sheet music into a digital format; provid- ing error-free transcribed symbolized sheet music intelligence from audio through signal processing involving pitch and time slicing then training a neural net to run Onset Detected CWT transforming scalograms to chroma- grams decoded with Recursive Neural Network focused networks. 8 9 Gomoku game Custom press releases Create custom press releases; develop a Python script that extracts rele- vant information about company news from external sources, such as social media; extract update interval database for recent changes. The program should create press releases with customizable options and export writings to PDFs, NYTimes API JSONs, media format styled with interlink internal fixed character-length metadata. | 2308.00352#62 | 2308.00352#64 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#64 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | Implement a Gomoku game using Python, incorporating an AI opponent with varying difficulty levels. Create a Python program to develop an interactive weather dashboard. 10 25 Preprint e r a s k s a t d e t c e l e s y l m o d n a r 0 1 d n a d e t a l u c l a c e r a s k s a t 0 7 f o ) . g v A ( s e g a r e v A . v e D e r a w t f o S n o k c a b d e e f . â D I k s a T â o / w T P G a t e s i â D I â | 2308.00352#63 | 2308.00352#65 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#65 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | e l i h w M e r u p y t i l i b a t u c e x e e d o C n o i s i v e r f o t s o C s c i t s i t a t s t s o C s c i t s i t a t s c o D s t s o c y e n o m s t s o c e m i t s n e k o t n o i t e l p m o c # s n e k o t t p m o r p # e l fi c o d r e p s e n i l # c o d f o s e n i l # s e l fi c o d # e l fi e d o c 4 r o r r E e p y T . 1 9 0 1 $ . 4 0 . 2 8 5 0 0 . 7 5 1 6 0 0 . 7 8 0 4 2 0 0 . 0 7 0 0 . 0 1 2 0 0 . 3 0 2 . 9 3 4 r o r r E e p y T . 1 5 3 1 $ . 0 3 . 6 6 5 0 0 . 8 3 2 6 0 0 . 7 1 5 2 3 7 6 . 6 7 0 0 . 0 3 2 0 0 . 3 3 8 . 1 3 3 k c a l . 1 4 0 1 $ . 1 1 . 3 5 5 0 0 . 6 1 3 6 0 0 . 4 3 9 1 2 3 3 . 8 7 0 0 . 5 3 2 0 0 . 3 0 0 . 6 6 ) â / â | 2308.00352#64 | 2308.00352#66 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#66 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | ( e t u o r . p p a @ 2 - s s i m e l fi G N P . 1 1 0 1 $ . 4 3 . 1 8 4 0 0 . 2 1 3 5 0 0 . 1 5 9 2 2 3 3 . 7 6 0 0 . 2 0 2 0 0 . 3 0 8 . 2 3 g u b e l i p m o C . 2 g n i s e x fi 3 e l fi G N P . 1 0 3 1 $ . 8 5 . 9 9 5 0 0 . 7 6 5 6 0 0 . 7 8 0 0 3 0 0 . 0 7 0 0 . 0 1 2 0 0 . 3 3 8 . 3 3 - m o C . 2 g n i s s i m . 3 s e x fi g u b e l i p t o n e c a f r u s . e m a g y p e z i l a i t i n i 4 - r e y c n e d n e p e d - t o N e l u d o M . 2 . 1 r o r 1 5 1 $ . 0 1 . 5 8 5 0 0 . 6 3 3 7 0 0 . 0 9 5 5 3 0 0 . 6 9 0 0 . 4 9 2 0 0 . 3 0 5 . 6 3 r o r r E d n u o F 4 0 0 9 0 $ . 3 8 . 8 9 3 0 0 . 2 3 8 5 0 0 . 3 7 6 5 2 0 0 . 7 8 0 0 . 1 6 2 0 0 . 3 5 2 . 8 1 4 0 2 9 0 $ . 3 8 . 5 3 4 0 0 . 4 0 1 7 0 0 . 9 3 1 9 2 7 6 . 0 1 1 0 0 . 2 3 3 0 0 . 3 0 0 . 9 7 2 - r e v w o fl r o s n e t . 1 7 2 1 $ . 3 7 . 1 2 6 0 0 . 9 9 4 6 0 0 . 2 7 3 9 2 3 3 . 0 0 1 0 0 . 1 0 3 0 0 . 3 0 0 . 3 4 l e d o m . 2 r o r r e n o i s t o n d o h t e m g n i n i a r t t n e m e l p m i 3 - r e y c n e d n e p e d . 1 7 2 1 $ . 8 8 . 0 5 5 0 0 . 4 3 7 5 0 0 . 9 9 7 4 2 0 0 . 0 9 0 0 . 0 7 2 0 0 . 3 0 0 . 3 4 - r e 3 0 4 L R U . 2 r o r r o r 4 - r e y c n e d n e p e d . 1 2 9 0 $ . 0 5 . 8 3 4 0 0 . 3 6 3 5 0 0 . 9 0 1 4 2 7 6 . 4 8 0 0 . 4 5 2 0 0 . 3 0 0 . 1 3 n i a m g n i s s i m . 2 r o r . c n u f 6 3 3 . r e d i s n o c y l n o ( 1 5 0 . 2 1 1 $ . 1 7 6 1 5 . 0 0 . 8 1 2 6 6 8 . 6 2 6 6 2 0 0 . 0 8 0 0 . 0 4 2 0 0 . 3 8 9 . 2 4 r o 3 , 2 d e r o c s m e t i ) 4 | 2308.00352#65 | 2308.00352#67 | 2308.00352 | [
"2308.12950"
]
|
2308.00352#67 | MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework | # f o # s t l u s e r l a n o i t i d d A : 6 e l b a T # , â f o r e b m u n # e h T â # s e t o n e d â # â # . d e d u l c n i s c i t s i t a t s # e d o C # D # I r e p s e n i l # e d o c # f o s e n i l # s e l fi e d o c # 0 0 . 6 9 1 0 0 . 5 0 0 0 . 1 9 1 0 0 . 6 1 0 0 . 8 9 1 0 0 . 3 2 4 6 1 0 0 . 5 3 0 0 . 3 0 2 0 0 . 6 4 0 0 . 9 1 2 0 0 . 6 5 0 0 . 3 7 0 0 . 4 6 0 0 . 6 1 3 0 0 . 4 7 0 0 . 5 1 2 0 0 . 5 8 0 0 . 5 1 2 0 0 . 5 9 0 0 . 3 9 0 0 . 3 0 1 26 7 5 . 1 9 1 1 7 . 4 . g v A | 2308.00352#66 | 2308.00352 | [
"2308.12950"
]
|
|
2308.00245#0 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | 3 2 0 2 v o N 5 1 ] E S . s c [ 3 v 5 4 2 0 0 . 8 0 3 2 : v i X r a # The Hitchhikerâ s Guide to Program Analysis: A Journey with Large Language Models Haonan Li [email protected] UC Riverside Riverside, California, USA Yu Hao [email protected] UC Riverside Riverside, California, USA Yizhuo Zhai [email protected] UC Riverside Riverside, California, USA Zhiyun Qian [email protected] UC Riverside Riverside, California, USA | 2308.00245#1 | 2308.00245 | [
"2305.10601"
]
|
|
2308.00245#1 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | ABSTRACT Static analysis is a widely used technique in software engineering for identifying and mitigating bugs. However, a significant hurdle lies in achieving a delicate balance between precision and scalabil- ity. Large Language Models (LLMs) offer a promising alternative, as recent advances demonstrate remarkable capabilities in com- prehending, generating, and even debugging code. Yet, the logic of bugs can be complex and require sophisticated reasoning and a large analysis scope spanning multiple functions. Therefore, at this point, LLMs are better used in an assistive role to complement static analysis. In this paper, we take a deep dive into the open space of LLM-assisted static analysis, using use-before-initialization (UBI) bugs as a case study. To this end, we develop LLift, a fully auto- mated framework that interfaces with both a static analysis tool and an LLM. By carefully designing the framework and the prompts, we are able to overcome a number of challenges, including bug-specific modeling, the large problem scope, the non-deterministic nature of LLMs, etc. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates a potent capability, showcasing a reasonable preci- sion (50%) and appears to have no missing bug. It even identified 13 previously unknown UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in using LLMs for bug discovery in extensive, real-world datasets. 1 INTRODUCTION Static analysis is a popular technique in software engineering, par- ticularly in the area of bug discovery, that can improve code quality, reliability, and security. However, the effectiveness of these tech- niques is influenced by the fundamental trade-off between precision and scalability, especially when dealing with extensive and com- plex programs [9, 24]. On the one hand, static analysis solutions with lower precision tend to generate numerous false positives. On the other hand, expensive static analysis or symbolic execution solutions with higher precision often struggle to complete the anal- ysis. Consequently, achieving comprehensive and accurate static program analysis for sizable programs like the Linux kernel poses a significant challenge. UBITect [40], a powerful static analysis solution illustrates these inherent limitations thoroughly. | 2308.00245#0 | 2308.00245#2 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#2 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Targeting Use-Before-Initialization (UBI) bugs in the Linux kernel, it packages a pipeline of (1) a scalable bottom-up summary-based static analysis with limited precision, and (2) a precise symbolic execution with limited scalability. The solution illuminates the need for alternative strategies to navigate the complex trade-offs between precision and scalability effectively. Despite this strategic combination of analysis techniques, nearly 40% of the potential bugs reported from the static analysis phase experience a timeout or memory exhaustion during the static sym- bolic execution phase, preventing any conclusive results on such cases. This limitation hinders the overall effectiveness of the tool, leading to the potential of two distinct outcomes: missed bugs if these potential bug reports are ignored (what UBITect performs), or false positives if they are sent to developers for inspection. In this paper, we investigate the possibility of leveraging Large Language Models (LLMs) as an alternative to handle such â difficult casesâ . This is because recent LLMs have exhibited strong potential in understanding, generating, and even debugging code [4, 8, 13]. Nevertheless, navigating the intricacies of utilizing LLMs for bug discovery proves to be a complex feat. The technical report on GPT-4 underscores this challenge, admitting that when it comes to discovering new vulnerabilities, it may not be the best solution standalone [21]: â ... is less effective than existing tools for complex and high-level activities like novel vulnerability identificationâ | 2308.00245#1 | 2308.00245#3 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#3 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | . In the same vein, prior research demonstrates the competence of LLMs mostly in simpler tasks or programs [1, 25, 26]. This is because LLMs are far from perfect. For instance, they suffer from hallucination [11] where instead of identifying the bugs in faulty code, LLMs may create non-existent facts in an attempt to rationalize the original intention behind the problematic code [17, 31]. Another issue is the stochasticity of LLMs which can result in inconsistent or outright incorrect results, thus throwing another wrench into the gears of bug discovery [41]. Finally, LLMs have limited context windows, meaning they can only scrutinize a relatively small codebase. In response, we propose LLift, a fully automated framework that bridges static analysis with LLMs in analyzing UBI bugs. Our solution packages several novel components. First, LLift performs post-constraint guided path analysis, which helps verify the path feasibility of the â | 2308.00245#2 | 2308.00245#4 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#4 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | useâ of an initialized variable, a difficult task for static analysis and symbolic execution. Second, to efficiently in- teract with LLMs, we employ task decomposition to break down the analysis into more than a single step. Third, we employ pro- gressive prompting by providing information incrementally only when necessary, instead of providing an enormous scope of code at once. Finally, we propose self-validation by requesting LLMs to 1 static int libcfs_ip_str2addr(...){ unsigned int a, b, c, d; if (sscanf(str, "%u.%u.%u.%u%n", 8a, &b, &c, &d, &n) >= 4){ // use of a, b, c, d 3 3 int sscanf(const char buf, const char «fmt, ...){ va_list args; int i; va_start(args, fmt); i = vsscanf(buf, fmt, args); va_end(args); # Figure 1: Code snippet of sscanf and its usecase Table 1: UBITectâ s summary for sscanf. Both use and initialization for va_args are incorrect. â and â stand for whether this parameter will be used/initialized after its call. â ...â represents all other parameters of va_args. buf fmt ... *buf *fmt Use Initialize â â â â â â â â â â | 2308.00245#3 | 2308.00245#5 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#5 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | review responses at various stages to obtain accurate and reliable responses. We implement a prototype of LLift and test it in real-world scenarios. Focusing on the inconclusive cases of UBITect caused by time or memory limitation, LLift successfully identifies 13 pre- viously unknown UBI bugs in the Linux kernel that we confirmed with the Linux community. With 26 positive reports out of nearly 1,000 cases, LLift reaches a high precision of 50%. We also test LLift against all previously known bugs found by UBITect, and observe a recall of 100%. | 2308.00245#4 | 2308.00245#6 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#6 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | # e # e # e ¢ We summarize our contributions as follows: New Opportunities. We introduce a novel approach to static analysis that enhances its precision and scalability at the same time by harnessing the capabilities of LLMs. To the best of our knowledge, we are the first to use LLMs to assist static analysis in bug-finding tasks with large-scale and real-world datasets. New Methodologies. We develop LLift, an innovative and fully automated framework that arms static analysis with LLMs. LLift employs several prompt strategies to engage with LLMs, eliciting accurate and reliable responses. Results. We rigorously investigate LLift by conducting an in- depth analysis of nearly 1000 cases, resulting in a reasonable precision rate (50%). Additionally, our examination led to the discovery of 13 previously unknown bugs. | 2308.00245#5 | 2308.00245#7 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#7 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Open source. Committed to open research, we will publicly release all of our code and data, fostering further exploration of the new space of LLM-assisted program analysis. 2 BACKGROUND & MOTIVATION 2.1 UBITect and Motivating Example UBITect is a state-of-the-art static analysis solution aiming at find- ing Use Before Initialization (UBI) bugs in the Linux kernel [40]. It employs a two-stage pipeline where the first stage employs a bottom-up summary-based static analysis of the Linux kernel. By design, this stage aims for scalability and sacrifices precision, pro- 140k), most of ducing a significant number of potential bugs (i.e., | 2308.00245#6 | 2308.00245#8 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#8 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | â ¼ 2 Haonan Li, Yu Hao, Yizhuo Zhai, and Zhiyun Qian which are false alarms. The static analysis is imprecise partly due to its lack of path sensitivity (often needed to discover UBI bugs). It is complemented by a second stage of static symbolic execution that filters as many false alarms as possible by verifying their path feasibility. However, 40% of the reported bugs are discarded due to timeout (10 minutes) or memory limitations (2 GB) during the symbolic execution, potentially missing genuine bugs. Figure 1 shows a case where UBITectâ s static analysis stage considers it a potential UBI bug (a false alarm) and the subsequent symbolic execution stage times out and fails to generate a definitive conclusion. In other words, UBITect failed to rule out this case as a false alarm. As Table 1 presents, the static analysis stage generates a summary of sscanf() as â may not initialize parameters a, b, c, and dâ but does use them at Line 3. Consequently, the static analysis stage reports two locations of use-before-initialization at Line 3 and Line 4, respectively. There are two reasons for the static analysis stage to consider the case a potential bug: 1) inability to recognize special functions: For soundness, UBITect assumed the va_start() is a normal function. However, since it cannot find its definition, it has to conservatively assume that the arguments passed to it will be used inside. Unfortunately, in reality, va_start is a compiler built-in function that simply â preparesâ the arguments without any uses. 2) insensitivity of path constraints: It fails to recognize the path constraint, i.e., if(sscanf(...)>=4), which ensures its arguments a to d must be initialized before use. 2.2 Practical Challenges of Static Analysis In light of our motivating example of the sscanf() case, we can summarize the reasons for UBITectâ s failure as follows: Inherent Knowledge Boundaries. Developers need to model specific functions or language features. Otherwise, they influence the correctness of the results. For compiler built-in functions, e.g., va_start(), their definitions are simply not available. Beyond this example, there exists an array of other scenarios, which are par- ticularly prevalent in the Linux kernel. | 2308.00245#7 | 2308.00245#9 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#9 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | These situations include assembly code, hardware behaviors, callback functions, concur- rency, and compiler built-in functions. However, in practical terms, it is often time-consuming to discover and model all these cases, because they can be highly dependent on the analysis target and evolve over time. This limitation often compromises the effective- ness of static analysis, leaving it less precise and comprehensive than desired. Exhaustive Path Exploration. Correctly handling cases like sscanf() requires it to consider the check: sscanf(...)>=4. Unfortunately, existing path-sensitive static analysis (and symbolic execution) techniques operate under a methodical but exhaustive paradigm, exploring all potential execution paths through the codebase. While this approach is theoretically comprehensive, it often leads to a combinatorial explosion. The vast array of execution paths necessitates the exploration of myriad functions, many of which ultimately prove irrelevant to the specific analysis task at hand. In the sscanf() case, its return value is computed inside an unbounded loop when iterating over an unknown string variable buf. This causes UBITectâ s symbolic execution to time out exactly due to this problem. The Hitchhikerâ s Guide to Program Analysis: A Journey with Large Language Models UBITect result 40% undecided Symbolic Execution Static Analysis LLift 60% successful execution Figure 2: The overview of LLift. Start with the discarded cases by UBITect and determine whether these potential bugs are true or false. | 2308.00245#8 | 2308.00245#10 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#10 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | O UBITect result 40% undecided Symbolic Execution Static Analysis LLift 60% successful execution Specifically, they detail the guarantees a routine offers based on its observable outcomes. . These outcomes are defined as updates to its parameters (and return value) for a path of ð . Particularly, does not include initialization for variables for convenience. In the study of UBI bug, for a routine ð that can yield a set of outcomes # , the postcondition O Pð : # P must_init ð (2) S( ) â O Ã Here, ð | 2308.00245#9 | 2308.00245#11 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#11 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | , set of variables that must be initialized. Motivating Example. Consider the sscanf() function in our mo- tivating example. Based on these return values, the postconditions assure the initialization of certain variables: Figure 3: A typical type of potential UBI bug. For each suspicious variable ð , we expect it to 1) have an initializer function that probably initializes ð and 2) use ð . 0, must_init 1, must_init 2, must_init 3, must_init 4, must_init 5, must_init ð ð ð ¡â 1) ð ð ð ¡â 2) ð ð ð ¡â 3) ð ð ð ¡â 4) ð ð ð ¡â 5) ð ð ð ¡â 6) ð ð ð ¡ { ð ð ð ¡ { ð ð ð ¡ { ð ð ð ¡ { ð ð ð ¡ { ð ð ð ¡ { : P ( â ¦â â ¦â â } ð â ¦â { : P ( â ¦â }} ð , ð : P ( â ¦â â ¦â { }} ð , ð , ð : P ( â ¦â â ¦â { }} ð , ð , ð , ð : P ( â ¦â â ¦â { }} ð , ð , ð , ð , ð : P ( â ¦â â ¦â { | 2308.00245#10 | 2308.00245#12 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#12 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | 2.3 Capability of LLMs Fortunately, LLMs [21] offers a promising alternative to summa- rizing code behaviors [22] in a flexible way and bypassing the aforementioned challenges. This is because LLMs are trained and aligned with extensive datasets that include both natural language and programs. Specifically, we observe that LLMs possess funda- mental abilities that assist in addressing each challenge: 1) domain- specific code recognition and 2) smart code summarization. Domain-specific Programming Constructs Recognition. This proficiency is showcased in three key areas: 1) Function Recogni- tion: LLMs can identify frequently used interfaces in the Linux ker- nel from its semantics, such as sscanf(), kzalloc(), kstrtoul(), and â list for eachâ , simplifying the analysis and making the analysis more scalable. 2) Function pointers and callbacks: LLMs can accurately interpret complex uses of function pointers as callbacks, which often require manual modeling. We will show an interesting case in §6.6. }} ð ð ð ¡â 6 represent different possible paths in the Here, the ð ð ð ¡â 1 â | 2308.00245#11 | 2308.00245#13 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#13 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | sscanf() and each path corresponds with a different postcondition. For UBI detection, not every associated postcondition is relevant; reachable are critical. ð ð ð ð ¡ [? ]. The ( Pð ð ¢ð ð , is a subset of Pð ð ¢ð ð = # P # C # P | C 4, â ¥ the qualified postcondition would be , which ensures that variables a, b, c, and d must be initialized; therefore, all variables used subsequently are initialized, and no UBI happens. In subsequent discussions, unless otherwise specified, the term Smart Code Summarization. LLMs can work with complicated functions; for example, that they can summarize loop invariants [26], which is an inherently difficult task in program analysis. This is likely because it has been trained on various functions with loops and their semantics. In contrast, traditional static analysis follows explicitly defined rules without a limited ability to generalize. | 2308.00245#12 | 2308.00245#14 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#14 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | â postconditionâ shall denote â qualified postconditionâ . 3.2 Post-Constraint Guided Path Analysis When analyzing a routine or function in a path-sensitive manner, the number of paths to explore can grow rapidly. Fortunately, if we have information about what the function is expected to achieve ð ð ð ð ¡ ), we can prune paths that inherently donâ t meet (given by those expectations. We categorize two scenarios, direct applica- tion and outcome conflicts, in applying this optimization. ð 3 PROBLEM FORMULATION 3.1 Definitions and Scope 3.1.1 Use-Before-Initialization. A Use Before Initialization (UBI) bug refers to the erroneous scenario where a variable ð £ is accessed or involved in any operation prior to its correct initialization. Let: e d(v) represent the declaration of v. ð £ ( ) ð £ ( ) ð £ ) ( if there exists ð # ð ð ¢ ð represent the declaration of ð £. signify a use operation involving ð £. denote the initialization operation of ð £. â ¢ â ¢ be its path set. Let ð ð ð ¡â refer to a specific path in ð . Besides, Each path ð ð ð ¡â has an associated path constraint ð that dictates its feasibility. These two optimizations can be formed with: Direct Application. For direct application, the post-constraint ð ð ð ð | 2308.00245#13 | 2308.00245#15 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#15 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | ¡ can be directly applied as a path constraint. A path can be ð £ and ð ¢ ð £ , then ð £ is used before initialization if: ( ) < ð ¢ ( ) < ð < ð ¢ ð £ : ð £ ð £ ð £ : ð ð £ ð £ ð £ (1) # ð ( ð )) â § ¬(â C discarded if: ( where < indicates a temporal sequence in the program execution. â ( ) ( ) ( ) ( ) ( )) 3.1.2 Postcondition. Postconditions encapsulate the expected state or behavior of a system upon the conclusion of a routine [18]. | 2308.00245#14 | 2308.00245#16 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#16 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | ð ð ð ð ¡â ð ð ð ð ¡ Â¬( ( ) â § C ) 3 qualified postcondition, of P refined by Cpost: This implies that if a ð ð ð ¡â inherently contradicts the post-constraint, it can be removed from consideration. Outcome Conflicts. Let denote the set of all outcomes or effects produced by path ð . A path can be pruned if any of its outcomes conflict with the post-constraint: ð ¬( This stipulates that if an outcome from ð ð ð ¡â | 2308.00245#15 | 2308.00245#17 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#17 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | inherently con- tradicts the post-constraint, that path can be disregarded in the analysis. Correctness. The validity of these optimization methods can be proved by contradiction. Consider an instance where one of these Cpost, it would paths is executed. If this path conflicts with the render ð ¢ unreachable. Thus, it becomes evident that such paths can be pruned without sacrificing the correctness of the analysis. We provide a concrete example of how we perform these opti- # ð â â O ( ) # â § C ) mizations in §4.3.3. 3.3 Conceptual Workflow Given a bug report containing a suspicious variable ð £ and its resid- ing function ð ¹ , the workflow Φ is as follows: ð £ ð ( ) â { the bug report. ð ¹, ð ð £ )) â C for each ð . ) ( ð £ ð ¹, C ) (1) Φ1 ( (2) Φ2 ( (3) Φ3 ( ð £ , : Summarize the ini- InitStatus tialization status for variable ð £ after all possible initializers completion (merge multiple initializers). Decision Policy. The decision policy Î is defined as: Î InitStatus ð £ : non-bug # = must_init ) â must_init ) # ( InitStatus # ( ð £ ) Î : potential bug ( ( ) | 2308.00245#16 | 2308.00245#18 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#18 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | In this policy, we adopt a conservative approach by treating all variables not explicitly marked as must_init as potential vulnerabili- ties. And it is worth noting that this policy may introduce some false positives. For example, it might over-approximate preconditions. Conceptually, LLift will not miss more bugs. The post-constraint guided path optimizations and decision policies are safe. 3.4 Turns and Conversations in LLMs We define two key concepts in interacting with LLMs: turn and conversation. Turn: A turn encapsulates a singular interaction with the LLM. , where ð represents the Formally, itâ s defined as a tuple, problem or question, and ð denotes the LLMâ s response. Conversation: Leveraging the capabilities of LLMs often necessitates a series of interactions, especially for complex problem-solving. A conversation is an ordered sequence of turns. A conversation comprising ð turns can be expressed as ð 1, ð 1) 4 DESIGN In Section §3.3, we introduced a conceptual workflow. Elaborating on that foundation, Figure 4 showcases a compelling illustration of our methodological approach. Yet, translating this workflow into | 2308.00245#17 | 2308.00245#19 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#19 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | ð 2, ð 2) [( ( ( )] Haonan Li, Yu Hao, Yizhuo Zhai, and Zhiyun Qian UBITect reports potential use- before-initialization bugs unsigned int Af (sscanf(str, fa, 8b, 8C,_ 8d. // use of a, static int libcfs_ip_str2addr(...){ b,c, dj} â Identify the initializer: sscanf } anoneai Extract the post-constraint: int sscanf(... ret>=4 va_start(args, fmt); i = vsscanf(buf, fmt, args); va_end(args); Analyze the initializer with post-constraint guidance Figure 4: Example run of LLift. For each potential bug, LLift â (Φ1) identifies its initializer, â ¡ (Φ2) extracts the post-constraints of the initializer, and â ¢ (Φ3) analyzes the behavior of the initializer with the post-constraints via LLM. practice presents its challenges. Even with the advanced knowledge and analytical capabilities of cutting-edge LLMs, achieving optimal results remains a challenge. Throughout the development of LLift, we identified several obstacles and subsequently introduced four distinct design components to effectively address these challenges. 4.1 Design Challenges It is non-trivial to prompt LLMs effectively [28, 41]. We meet the following challenges and propose solutions correspondingly in designing LLift. | 2308.00245#18 | 2308.00245#20 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#20 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | # e # e # e C1. Limited Understanding of Post-constraint. Despite LLMs (e.g., GPT-4) are able to comprehend the definition of post-constraint and apply them in simple scenarios, we found their capacity to utilize this knowledge in actual program analysisâ such as summarizing function behavior in line with specific post-constraint â to be limited. This critical limitation often results in unpredictable and inconsistent outcomes. C2. Token Limitations. It is known that LLMs have token limitations. For example, GPT-3.5 supports 16k tokens and GPT-4 supports 32k tokens [20]. This means that we do not want to copy a large number of function bodies in our prompts to LLMs. | 2308.00245#19 | 2308.00245#21 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#21 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | C3. Unreliable and Inconsistent Response. LLMs are known to result in unreliable and inconsistent responses due to halluci- nation and stochasticity [41]. Stochasticity refers to the inherent unpredictability in the modelâ s outputs [32]; and the halluci- nation refers to LLMs generating nonsensical or unfaithful re- sponses [11, 42]. By design, the stochasticity can be mitigated with lower temperature, a hyperparameter controlling the degree of randomness in outputs [27]; however, reducing temperature may impair the modelâ s exploring ability [37] and therefore may miss corner cases that result in vulnerabilities. 4.2 Design Overview We will discuss our design strategies to address the above chal- lenges in the rest of the section. Before that, we provide a high-level overview of our solution. To tackle challenge C1 (Post-constraint), we propose to encode (D#1) Post-Constraint Guided Path Analysis by teaching LLMs with examples, or few-shot in-context learning, of post- constraints. This approach enables LLMs to learn from a small number of demonstrative examples, assimilate the underlying patterns, and apply this understanding to process post-constraint guidance in our analysis. | 2308.00245#20 | 2308.00245#22 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#22 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | 4 The Hitchhikerâ s Guide to Program Analysis: A Journey with Large Language Models 0.1; Initializer & Post-constraints Extraction - Prompt LLMs to extract the initializer & post-constraints Prompt LLMs to summarize the initializer Â¥ Perform Progressive Prompt Â¥ with requested information Perform Self-validation with previous response ed more If succeed es en A Perform Self-validation with previous response Â¥ Â¥ Conclude the previous response in JSON Conclude the previous response in JSON Figure 5: The workflow of LLift. Given a potential bug, we let LLM first iden- tify the initializer and then extract its post-constraints (Convo.1), then leverage them to summarize the behavior of the initializer (Convo.2). A conversation consists of prompts (boxes) and responses (edges). To tackle challenge C2 (Token Limitation), We employ two strate- gies: (D#2) Progressive Prompt. Instead of copying a large num- ber of function bodies (i.e., subroutines), we only provide function details on demand, i.e., when LLMs are not able to conduct a re- sult immediately. (D#3) Task Decomposition. We break down the problem into sub-problems that can be solved in independent conversations, i.e., a sequence of prompt and response pairs. To tackle challenge C3 (Unreliable Response), we employ the following strategies: (D#4) Self-Validation. We ask LLMs to review and correct their previous responses. This helps improve the consistency and accuracy based on our observation. Besides, (D#2) Progressive Prompt and (D#3) Task Decomposition also help to deal with this challenge. Additionally, we implement majority voting by running each case multiple times and use majority voting to combat stochasticity. We elaborate the design of (D#1 - #4) Post Constraint Guided Path Analysis, Progressive Prompts, Task Decomposition, and Self-Validation detailed in the rest of this section. The effectiveness and efficiency of these design strategies are rigorously evaluated in §6.4, revealing a substantial enhancement in bug detection within the Linux kernel. # 4.3 Design #1: Post-Constraint Guided Path Analysis | 2308.00245#21 | 2308.00245#23 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#23 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | The Linux kernel frequently employs return value checks as illus- trated in Table 2. Through our detailed examination of non-bug instances, we found that a path-sensitivity analysis can effectively eliminate over 70% of these negative cases. However, path-sensitive static analysis usually suffers from path explosion, especially in large-scale codebases like the Linux kernel. Fortunately, we can prompt the LLM to collect ð ð ð ð ¡ and sum- ð ð ð ð ¡ . It is worth noting C marize the function with respective to the # C 5 Table 2: Two types of post-constraints and their variants. Check Before Use Failure Check Type A: if (sscanf(...) >= 4) { err = func(&a); use(a, b, c, d); if (err) { return/break/goto; } } use(a) Type B: Type Aâ : switch(ret=func(&a)){ case some_irrelevant_case: Type Bâ : while(func(&a)){ do_something(...); do_something(...); } break; use(a); case critical_case: use(a); | 2308.00245#22 | 2308.00245#24 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#24 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | that current LLMs (e.g., GPT-4) are not natively sensitive to the sen- sitivity; without any additional instructions, LLMs usually overlook the post-constraints. Therefore, we teach the LLM to be sensitive to post-constraints rules through few-shots in-context learning. We describe the design details as follows: 4.3.1 Post-Constraints Extraction. To extract the qualified postcon- dition, we first determine the post-constraints that lead to the use of suspicious variables. We incorporate few-shot in-context learn- ing to teach LLMs how to extract such constraints from the caller context. Table 2 demonstrates how we teach LLM with in-context learning. We focus primarily on two types of code patterns: | 2308.00245#23 | 2308.00245#25 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#25 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | # e # e Check Before Use. Type A is our motivating example; by look- ing at its check, the post-constraint should be ð ð ð ¡ 4. Type Aâ describes a similar case with switch-cases, with expected output ð ð ð ¡ Failure Check. This pattern captures the opposite of the first pattern. They commonly occur in the Linux kernel where the error conditions cause the use to become unreachable, as illus- trated in Type B, the post-constraint is ð ð ð 0. Type Bâ | 2308.00245#24 | 2308.00245#26 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#26 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | depicts a variant where the initializer keeps retrying til success, and therefore with expected output ð ð ð ¡ 0, which indicates its first successful execution to break the endless loop. â ¦â 4.3.2 Function Behavior Summarization. Once we obtain the post- contraints in Convo.1, we feed them to the LLM to obtain the behav- ior summary in Convo.2 . For example, we provide the following: { "initializer": "ret = sscanf(str,'%u.%u.%u.%u%n',&a,&b,&c,&d,&n)", "suspicious": ["a", "b", "c", "d"], "postconstraint": "ret >= 4" } | 2308.00245#25 | 2308.00245#27 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#27 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | The LLM may respond with { "ret": "success", "response": { "must_init": ["a", "b", "c", "d"], "may_init": [{"name":"n", "condition": "ret > 4"}] } } The response succinctly encapsulates the function behavior, where variables a,b,c,d are classified as must_init, and n is cat- egorized as may_init. This is due to the initialization of n only occurring when ð ð ð ¡ > 4, and not when ð ð ð ¡ â ¦â 1 2 3 4 5 6 must_init = or â ¤ some_condi â {¬ ð â must_init = if: â ð ð ð ð ¡ = ð ð int func(int* a){ if(some_condi) return -1; C â : ð ð ð ð ð ð ¡ } : ð â ¥ C ð ð ð ð ¡ 0 } if: ð ð ð ¡ â ¦â ð } ) â § C ð ð ð ð ¡ â { â ¥ C *a = ... // init return 0; { some_condi 0 ð ð ð ð ¡ or } â § â ¦â Figure 6: | 2308.00245#26 | 2308.00245#28 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#28 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | A sample case of initializer func, *a is may_init or must_init under different post-constraints. Note that this seemingly simple interaction with LLMs can be challenging for static analysis or symbolic execution. Consider the sscanf() example, even if the analysis is aware that the qualified postcondition should be limited to those where ð ð ð ¡ 4, it would still need to enumerate the paths inside of sscanf(), which involves loops and can easily lead to timeouts as explained in §2.1. 4.3.3 Apply Path Analysis. Following §3.2, Figure 6 presents a con- cert example of post-constraint guided path analysis. This case of the variable ð . Given an early shows a simple initializer ð return, the initialization in line 4 may not be executed. As such, the qualified postconditions become contingent on the post-constraints ð ð ð ð ¡ . There are: If the use of variable a is unconditional, i.e., . In this case, the variable ð is labeled as may_init given that the initial- ization may not be reached. In general, if all path constraints and outcomes of must_init are disjoint from ð ð ð ð | 2308.00245#27 | 2308.00245#29 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#29 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | ¡ , no path can be pruned out. We could also conclude ð as may_init. If the use of variable ð is conditional with constraints, i.e., # e ð ð ð ð ¡ â C # e , two cases emerge: â ¤ (1) ð ð ð ð ¡ clashes with the constraints of the path (e.g., C some_condi), or (2) ð ð ð ð ¡ could be some_condi or func(...)==0 In these instances, C and we can designate *a as must_init. 4.4 Design #2: Progressive Prompt The Linux kernel has an extremely large codebase. Summarizing an initializer using LLMs without providing any supplementary function definitions can result in incomplete or erroneous responses. On the other hand, flooding the LLM with every relevant function definition upfront risks exceeding their context window limitations. To address this dilemma, we choose to progressively provide function definitions as needed. Illustrated in Figure 5, this approach, which we refer to as Progressive Prompt, fosters a dynamic inter- action with the LLM rather than expecting a response in one shot. Throughout this iterative exchange, we consistently prompt the LLM: â If you encounter uncertainty due to a lack of function defini- tions, please signal your need, and Iâ ll supply themâ . Should the LLM need more information, LLift will promptly extract the relevant details on demand from the source code and provide it to the LLM automatically, enabling it to reassess and generate a more accurate response. Specifically, We teach the LLM to ask for more information with a specific format: | 2308.00245#28 | 2308.00245#30 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#30 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | [{"type":"function_def", "name":"some_func" }] 6 Haonan Li, Yu Hao, Yizhuo Zhai, and Zhiyun Qian Subsequently, LLift scans this format in the LLMâ s response. For each requested function definition, LLift supplies its correspond- ing code along with comments extracted from the Linux source code. Though GPT-4 may seek other types of information beyond function definitions (e.g., struct definitions), we currently limit our support to requests pertaining to function definitions. The iterative process continues until either the LLM no longer re- quests additional information, or LLift cannot supply the requested details. In certain situations where LLift is unable to provide more information (e.g., the definition of an indirect call), LLift will still prompt the LLM to proceed with the analysis. In these instances, the LLM is encouraged to infer the behavior based on the available data and its inherent knowledge, thereby facilitating continued analysis even when not all information is directly accessible. 4.5 Design #3: Task Decomposition We systematically apply the principle of task decomposition, a vital element of our design process. This concept is incorporated primarily in two distinct ways. Multistage Problem Solving. As illustrated in Figure 5, we em- ploy a two-conversation approach to complete the task. Each con- versation, essentially consists of multiple iterations of prompts and responses. The first conversation (Convo.1) is dedicated to extracting the initializer and its associated post-constraints (sub- tasks 1 and 2), while the second conversation (Convo.2) focuses on summarizing the function (subtask 3) based on the previously identified post-constraints. This division allows a more manageable and effective way of achieving the task, compared to combining all three subtasks into a single conversation. The efficacy of this task decomposition approach is further evaluated in §6.5. | 2308.00245#29 | 2308.00245#31 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#31 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Thinking in English. Our workflow necessitates a structured output, such as a JSON format, for automation. However, we ob- serve that LLMs often produce suboptimal results when directly prompted to output in this format. As LLMs build responses incre- mentally, word-by-word, based on preceding outputs [32], direct prompts to output JSON may interrupt their thought progression. This emphasizes the importance of initially soliciting responses in natural language to ensure comprehensive and effective reasoning. Consequently, we instruct the LLM to first articulate their thought processes in English, followed by a subsequent prompt to transform their response into a JSON summary. 4.6 Design #4: Self-Validation At times, LLMs can display unpredictable or inconsistent behav- iors, particularly in complex scenarios involving detailed logical constructs. Consider a case where an initializer carries the postcon- dition must_init if ð ð ð ¡ 0. LLMs may still mistakenly assume it to be may_init, despite the explicit presence of the post-constraint ð ð ð ¡ â ¦â | 2308.00245#30 | 2308.00245#32 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#32 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Conversely, an LLM might erroneously interpret a non-existent post-constraint and incorrectly infer a may_init case as must_init. This phenomenon is known as hallucination. Essentially, the hallu- cination can lead to both false positives and false negatives in bug detection, thereby affecting accuracy and reliability. In addition to task decomposition, we also introduce the concept of self-validation to enhance reliability. Before the LLM reaches its The Hitchhikerâ s Guide to Program Analysis: A Journey with Large Language Models final conclusion, this method reinforces specific rules, allowing the LLM to reassess their previous responses for adherence and make necessary corrections. We observed that this practice yields better results. We evaluate the effect of self-validation in §6.4. As seen in Figure 5, we employ self-validation in both conversa- tions. By prompting a list of correct properties that we expect, LLMs can verify and correct their results by themselves automatically. 4.7 Additional Prompting Strategies In order to further optimize the efficacy of our model, we have incorporated several additional strategies into our prompt design: Chain-of-Thought. Leveraging the Chain-of-Thought (CoT) approach, we encourage the LLMs to engage in stepwise reason- ing, using the phrase â think step by stepâ . This not only helps generate longer, comprehensive responses, but it also provides intermediate results at each juncture of the thought process. Pre- vious studies suggest the CoT approach considerably enhances the LLMsâ reasoning capabilities [3]. We incorporate the CoT strategy into every prompt. Source Code Analysis. Rather than analyzing abstract repre- sentations, we opt to focus our attention directly on the functions within the source code. This approach not only economizes on token use compared to LLVM IR, but also allows the model to leverage the semantic richness of variable names and other pro- gramming constructs to conduct a more nuanced analysis. There are still some interesting details in designing an effective prompt but due to space constraints and without changing the overall strategy, we will not list them all. Readers intrigued can delve into the intricacies of our open-sourced prompt1 design and experimental implementations to gain a deeper understanding. 5 IMPLEMENTATION We implement the prototype of LLift based on OpenAIâ s API [19] (i.e., gpt-4-0613). We describe some implementation details in the following aspects: | 2308.00245#31 | 2308.00245#33 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#33 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | Interaction with LLMs. LLiftâ s interaction with LLMs is managed by a simple agent developed in Python, containing roughly 1,000 lines of code. In addition, it uses seven prompts, which altogether constitute about 2,000 tokens in two conversations. All interactions are fully automated via APIs of OpenAI. Besides sending prompts and waiting for responses, our agent also 1) interacts with LLMs according to the progressive prompt design, 2) locates function definitions within the Linux source code, and 3) processes responses from LLMs, then receives and stores to a database. Hyper-Parameters. There are several hyper-parameters in calling the APIs provided by OpenAI. We choose max_token and temperature to 1,024 and 1.0, respectively. max_token controls the output length; since LLMs always predict the next words by the previous output, the longer output can benefit and allow its reasoning. However, too many tokens will exhaust the context window quickly, so we pick 1024 as a reasonable balance. The temperature controls the randomness and also the ability to reason. Intuitively, we want the analysis to be as non-random as possible and reduce the temperature (it can take a value between 0 | 2308.00245#32 | 2308.00245#34 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#34 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | 1https://sites.google.com/view/llift-open/prompt 7 and 2 for GPT models); however, an overly low temperature can result in repetitive or overly simplistic responses. We set it to 1.0 (also the default of gpt-4-0613), which allows for higher-quality responses, and use strategies such as self-validation and majority voting to improve the consistency of responses. 6 EVALUATION Our evaluation aims to address the following research questions. RQ1 (Precision): How accurately is LLift able to identify bugs? RQ2 (Recall): Is there a possibility for LLift to miss real bugs? RQ3 (Comparison): How does the performance of individual components within LLift compare to that of the final design? RQ4 (Model Versatility): How does LLift perform when ap- plied to LLMs other than GPT-4? | 2308.00245#33 | 2308.00245#35 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#35 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | â ¢ â ¢ We evaluate RQ1 to RQ3 in GPT-4, under API from OpenAI with version gpt4-0613. For RQ4, we also test GPT-3.5 with version gpt-3.5-turbo-0613 and Claude 2 additionally for comparison. 6.1 Dataset Our experiment data, sourced from UBITect, includes all potential bugs labeled by its static analysis stage but experienced timeout or memory exhaustion during its symbolic execution stage. Overall, UBITectâ s static analysis stage produced 140,000 potential bugs, with symbolic execution able to process only 60%, leaving 53,000 cases unattended, which means that these cases are generally diffi- cult for static analysis or symbolic execution to decide We craft the following dataset from 53,000 cases to evaluate LLift: (1) Random-1000. We randomly chose 1,000 from the 53,000 cases for testing. However, there are 182 cases where there are no initializers, which are automatically recognized and filtered (see §3). The remaining 818 cases are used in evaluating precision, i.e., the ratio of true positives to false positives. (2) Bug-50. This dataset comprises the 52 confirmed UBI bugs previously identified by UBITect. It is used as ground truth for assessing recall by verifying if any true bugs were overlooked. | 2308.00245#34 | 2308.00245#36 | 2308.00245 | [
"2305.10601"
]
|
2308.00245#36 | The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models | (3) Cmp-40. This dataset comprises 27 negative and 13 positive cases selected from the Random-1000. We utilize this dataset to illustrate which of our design strategies contributed most to the outcome of our solution. Turns and Conversations. Due to the progressive prompt, each case may require different turns (pairs of a prompt and a response). In Random-1000, the average number of turns is 2.78, with a max of 8 and a variance of 1.20. | 2308.00245#35 | 2308.00245#37 | 2308.00245 | [
"2305.10601"
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.